Administration Guide
General administration for the Trusted Artifact Signer service
Abstract
Preface
Welcome to the Red Hat Trusted Artifact Signer Administration Guide.
This guide helps you with maintenance routines and tasks for Red Hat’s Trusted Artifact Signer (RHTAS) service running on Red Hat platforms. The content is organized by your installation platform.
You can find information about deploying the Trusted Artifact Signer service in the Deployment Guide.
Chapter 1. Red Hat OpenShift Container Platform
1.1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical, especially in the event of data loss due to hardware failure or accidental deletion.
The OpenShift API Data Protection (OADP) product provides data protection to applications running on Red Hat OpenShift Container Platform. Using OADP helps get software developers back to signing and verifying code as quickly as possible. After installing and configuring the OADP operator, you can start backing up and restoring your Red Hat Trusted Artifact Signer (RHTAS) data.
1.1.1. Installing and configuring the OADP Operator
The OpenShift API Data Protection (OADP) Operator gives you the ability to back up OpenShift application resources and internal container images. You can use the OADP Operator to back up and restore your Trusted Artifact Signer data.
This procedure uses Amazon Web Services (AWS) Simple Storage Service (S3) to create a bucket for illustrating how to configure the OADP operator. You can choose to use a different supported S3-compatible object storage platform instead of AWS, such as Red Hat OpenShift Data Foundation.
Prerequisites
- Red Hat OpenShift Container Platform 4.16 or later.
- Access to the OpenShift web console with the cluster-admin role.
- The ability to create an S3-compatible bucket.
- A workstation with the oc and aws binaries installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Create a new bucket:
export BUCKET=NEW_BUCKET_NAME
export REGION=AWS_REGION_ID
export USER=OADP_USER_NAME
aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

$ export BUCKET=example-bucket-name
$ export REGION=us-east-1
$ export USER=velero
$ aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION
Create a new user:
$ aws iam create-user --user-name $USER
Create a new policy:
$ cat > velero-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
EOF

Associate this policy with the new user:
$ aws iam put-user-policy \ --user-name $USER \ --policy-name velero \ --policy-document file://velero-policy.json
Create an access key:
$ aws iam create-access-key --user-name $USER --output=json | jq -r '.AccessKey | [ "export AWS_ACCESS_KEY_ID=" + .AccessKeyId, "export AWS_SECRET_ACCESS_KEY=" + .SecretAccessKey ] | join("\n")'

Run the two export commands from the output so that the variables are set in your shell, then create a credentials file with your AWS secret key information:
$ cat << EOF > ./credentials-velero [default] aws_access_key_id=$AWS_ACCESS_KEY_ID aws_secret_access_key=$AWS_SECRET_ACCESS_KEY EOF
- Log in to the OpenShift web console with a user that has the cluster-admin role.
- From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
- In the search field, type oadp, and click the OADP Operator tile provided by Red Hat.
- Click the Install button to show the operator details.
- Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
After the operator installation finishes, from your workstation terminal, create a secret resource for OpenShift with your AWS credentials:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
- From the OpenShift web console, click the View Operator button.
- Click Create instance on the DataProtectionApplication (DPA) tile.
- On the Create DataProtectionApplication page, select YAML view.
Edit the following values in the resource file:
- Under the metadata section, replace velero-sample with velero.
- Under the spec.configuration.nodeAgent section, replace restic with kopia.
- Under the spec.configuration.velero section, add resourceTimeout: 10m.
- Under the spec.configuration.velero.defaultPlugins section, add - csi.
- Under the spec.snapshotLocations section, replace the us-west-2 value with your AWS regional value.
- Under the spec.backupLocations section, replace the us-east-1 value with your AWS regional value.
- Under the spec.backupLocations.objectStorage section, replace my-bucket-name with your bucket name. Replace velero with your bucket prefix name, if you use a different prefix. A consolidated example of the finished resource follows this list.
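The following is a sketch of how the finished DataProtectionApplication resource might look after these edits, assuming the default field layout of the OADP sample template and the example bucket and region used earlier in this procedure; adjust the values to match your environment:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      resourceTimeout: 10m
      defaultPlugins:
        - openshift
        - aws
        - csi
  backupLocations:
    - velero:
        provider: aws
        default: true
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: example-bucket-name
          prefix: velero
        config:
          region: us-east-1
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: us-east-1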
- Click the Create button.
1.1.2. Backing up your Trusted Artifact Signer data
With the OpenShift API Data Protection (OADP) operator installed and an instance deployed, you can create a volume snapshot resource and a backup resource to back up your Red Hat Trusted Artifact Signer (RHTAS) data.
Prerequisites
- Red Hat OpenShift Container Platform 4.16 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the OADP operator.
- A workstation with the oc binary installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Find and edit the VolumeSnapshotClass resource:
$ oc get VolumeSnapshotClass -n openshift-adp
$ oc edit VolumeSnapshotClass csi-aws-vsc -n openshift-adp
Update the following values in the resource file:
- Under the metadata.labels section, add the velero.io/csi-volumesnapshot-class: "true" label.
- Save your changes, and quit the editor.
Create a one-time, initial Backup job resource:
$ cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: rhtas-backup
  labels:
    velero.io/storage-location: velero-1
  namespace: openshift-adp
spec:
  hooks: {}
  includedNamespaces:
    - trusted-artifact-signer
  includedResources: []
  excludedResources: []
  snapshotMoveData: true
  storageLocation: velero-1
  ttl: 720h0m0s
EOF

By default, all resources within the trusted-artifact-signer namespace are backed up. You can specify what resources you want to include or exclude by using the includedResources or excludedResources properties respectively.
Important: Depending on the storage class of the backup target, persistent volumes must not be actively in use for the backup to succeed.
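To confirm that the one-time backup finished, you can check the phase of the Backup resource; a minimal check, assuming the rhtas-backup name used above, where the phase reports Completed on success:

$ oc get backup rhtas-backup -n openshift-adp -o jsonpath='{.status.phase}'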
Create a new schedule for regular backups to occur:
$ cat << EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: BACKUP_JOB_NAME
  namespace: openshift-adp
spec:
  schedule: USER_DEFINED_SCHEDULE
  template:
    hooks: {}
    includedNamespaces:
      - trusted-artifact-signer
    storageLocation: velero-1
    defaultVolumesToFsBackup: true
    ttl: 720h0m0s
EOF
Replace BACKUP_JOB_NAME with a job name, and replace USER_DEFINED_SCHEDULE with a cron-formatted expression for the schedule. For example, a cron-formatted schedule of */10 * * * * backs up the trusted-artifact-signer namespace and its resources every 10 minutes.

You can verify that this schedule is enabled, and when the last backup job ran. For example:
$ oc get schedule -n openshift-adp
NAME            STATUS    SCHEDULE       LASTBACKUP   AGE   PAUSED
rhtas-backups   Enabled   0/10 * * * *   3m11s        16m
1.1.3. Restoring your Trusted Artifact Signer data
With the Red Hat Trusted Artifact Signer (RHTAS) and OpenShift API Data Protection (OADP) operators installed, and a backup resource for the RHTAS namespace, you can restore your data to an OpenShift cluster.
Prerequisites
- Red Hat OpenShift Container Platform version 4.16 or later.
- Access to the OpenShift web console with the cluster-admin role.
- Installation of the RHTAS Operator.
- Installation of the OADP Operator.
- A backup resource of the trusted-artifact-signer namespace structure.
- A workstation with the oc binary installed.
Procedure
Disable the RHTAS operator:
$ oc scale deploy rhtas-operator-controller-manager --replicas=0 -n openshift-operators
Create the Restore resource:
$ cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: rhtas-restore
  namespace: openshift-adp
spec:
  backupName: rhtas-backup
  includedResources: []
  restoreStatus:
    includedResources:
      - securesign.rhtas.redhat.com
      - trillian.rhtas.redhat.com
      - ctlog.rhtas.redhat.com
      - fulcio.rhtas.redhat.com
      - rekor.rhtas.redhat.com
      - tuf.rhtas.redhat.com
      - timestampauthority.rhtas.redhat.com
  excludedResources:
    - pod
    - deployment
    - nodes
    - route
    - service
    - replicaset
    - events
    - cronjob
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - pods
    - deployments
  restorePVs: true
  existingResourcePolicy: update
EOF

If restoring your RHTAS data to a different OpenShift cluster, do the following steps.
Delete the secret for the Trillian database:
$ oc delete secret securesign-sample-trillian-db-tls
$ oc delete pod trillian-db-xxx
Note: The RHTAS operator recreates the secret and restarts the pod.
- Run the restoreOwnerReferences.sh script.
Enable the RHTAS operator:
$ oc scale deploy rhtas-operator-controller-manager --replicas=1 -n openshift-operators
Important: Starting the RHTAS operator immediately after starting the restore ensures that the persistent volume is claimed.
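To verify that the restore finished, you can check the phase of the Restore resource and watch the RHTAS pods return to a ready state; a minimal check, assuming the rhtas-restore name used above:

$ oc get restore rhtas-restore -n openshift-adp -o jsonpath='{.status.phase}'
$ oc get pods -n trusted-artifact-signer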
1.2. The Update Framework
As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important in helping you maintain a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.
1.2.1. Trusted Artifact Signer’s implementation of The Update Framework
Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store the public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, which makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This simple HTTP server hosts the TUF repository of the individual targets.
By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
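For example, a client workstation initializes its local copy of the trust root by pulling the TUF repository from that HTTP server; a minimal sketch, assuming the TUF route exposed by the Securesign instance and the cosign binary installed on the workstation:

$ export TUF_URL=$(oc -n trusted-artifact-signer get tuf -o jsonpath='{.items[0].status.url}')
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json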
1.2.2. Updating The Update Framework metadata files
By default, The Update Framework (TUF) metadata files expire 52 weeks after the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.
This procedure walks you through refreshing the root and non-root metadata files.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc binary installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:
$ gunzip tuftool-amd64.gz
$ chmod +x tuftool-amd64
Move and rename the binary to a location within your $PATH environment:
$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example"
$ export ROOT="${WORK}/root/root.json"
$ export KEYDIR="${WORK}/keys"
$ export INPUT="${WORK}/input"
$ export TUF_REPO="${WORK}/tuf-repo"
$ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"
$ export TIMESTAMP_EXPIRATION="in 10 days"
$ export SNAPSHOT_EXPIRATION="in 26 weeks"
$ export TARGETS_EXPIRATION="in 26 weeks"
$ export ROOT_EXPIRATION="in 26 weeks"

Set the expiration durations according to your requirements.
Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

Download the TUF contents to the temporary TUF directory structure:
$ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
$ cp "${TUF_REPO}/root.json" "${ROOT}"

You can update the timestamp, snapshot, and targets metadata all in one command:
$ tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --snapshot-expires "${SNAPSHOT_EXPIRATION}" \
  --targets-expires "${TARGETS_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Note: You can also run the TUF metadata update on a subset of TUF metadata files. For example, the timestamp.json metadata file expires more often than the other metadata files. Therefore, you can update just the timestamp metadata file by running the following command:
$ tuftool update \
  --root "${ROOT}" \
  --key "${KEYDIR}/timestamp.pem" \
  --timestamp-expires "${TIMESTAMP_EXPIRATION}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Only update the root expiration date if it is about to expire:
$ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"

Note: You can skip this step if the root file is not close to expiring.
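To decide whether the root, or any other metadata file, is close to expiring, you can read the expiration timestamps from the downloaded metadata; a quick check, assuming the jq binary is installed on your workstation:

$ for f in root targets snapshot timestamp; do echo "$f: $(jq -r '.signed.expires' "${TUF_REPO}/${f}.json")"; done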
Update the root version:
$ tuftool root bump-version "${ROOT}"

Sign the root metadata file again:
$ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"

Set the new root version, and copy the root metadata file into place:
$ export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version")
$ cp "${ROOT}" "${TUF_REPO}/root.json"
$ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"

Upload these changes to the TUF server:
$ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
1.3. Rotate your certificates and keys
As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can help prevent key tampering and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:
- Rekor
- Certificate Transparency log
- Fulcio
- Timestamp Authority
1.3.1. Rotating the Rekor signer key
You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. When you expire your old Rekor signer key, you can still verify artifacts signed by the old key.
This procedure requires downtime to the Rekor service.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the rekor-cli binary from the OpenShift cluster to your workstation.
- Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools, go to the rekor-cli download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
$ gunzip rekor-cli-amd64.gz
$ chmod +x rekor-cli-amd64

Move and rename the binary to a location within your $PATH environment:
$ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: The tuftool binary is only available for Linux operating systems.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.

From a terminal on your workstation, decompress the binary .gz file, and set the execute bit:
$ gunzip tuftool-amd64.gz
$ chmod +x tuftool-amd64

Move and rename the binary to a location within your $PATH environment:
$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Get the Rekor URL:
$ export REKOR_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}')

Get the log tree identifier for the active shard:
$ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
Set the log tree to the DRAINING state:
$ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

While draining, the tree log does not accept any new entries. Watch and wait for the queue to empty.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
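One informal way to watch the drain is to poll the tree size reported by Rekor until it stops changing, which indicates that pending leaves have finished integrating; a sketch, assuming the REKOR_URL variable set earlier and the watch utility available on your workstation:

$ watch -n 5 "rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize"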
Freeze the log tree:
$ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

Get the length of the frozen log tree:
$ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
Get Rekor’s public key for the old shard:
$ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
Create a new log tree:
$ export NEW_TREE_ID=$(oc run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --display_name=rekor-tree --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt)
Now you have two log trees, one frozen tree, and a new tree that will become the active shard.
Create a new private key:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-rekor.pem
Important: The new key must have a unique file name.
Create a new secret resource with the new signer key:
$ oc create secret generic rekor-signer-key --from-file=private=new-rekor.pem
Update the Securesign Rekor configuration with the new tree identifier and the old sharding information:
$ read -r -d '' SECURESIGN_PATCH_1 <<EOF
[
  {
    "op": "replace",
    "path": "/spec/rekor/treeID",
    "value": $NEW_TREE_ID
  },
  {
    "op": "add",
    "path": "/spec/rekor/sharding/-",
    "value": {
      "treeID": $OLD_TREE_ID,
      "treeLength": $OLD_SHARD_LENGTH,
      "encodedPublicKey": "$OLD_PUBLIC_KEY"
    }
  },
  {
    "op": "replace",
    "path": "/spec/rekor/signer/keyRef",
    "value": {"name": "rekor-signer-key", "key": "private"}
  }
]
EOF

Note: If you have /spec/rekor/signer/keyPasswordRef set with a value, then create a new separate update to remove it:
$ read -r -d '' SECURESIGN_PATCH_2 <<EOF
[
  {
    "op": "remove",
    "path": "/spec/rekor/signer/keyPasswordRef"
  }
]
EOF

Apply this update after applying the first update; an example follows the next command.
Update the Securesign instance:
$ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_1"
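If you created the second patch to remove the key password reference, apply it now in the same way; a sketch, assuming SECURESIGN_PATCH_2 is still set in your shell:

$ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH_2"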
Wait for the Rekor server to redeploy with the new signer key:
$ oc wait pod -l app.kubernetes.io/name=rekor-server --for=condition=Ready
Get the new public key:
$ export NEW_KEY_NAME=new-rekor.pub
$ curl $(oc get rekor -o jsonpath='{.items[0].status.url}')/api/v1/log/publicKey -o $NEW_KEY_NAME

Configure The Update Framework (TUF) service to use the new Rekor public key.
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example"
$ export ROOT="${WORK}/root/root.json"
$ export KEYDIR="${WORK}/keys"
$ export INPUT="${WORK}/input"
$ export TUF_REPO="${WORK}/tuf-repo"
$ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"

Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

Download the TUF contents to the temporary TUF directory structure:
$ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
$ cp "${TUF_REPO}/root.json" "${ROOT}"

Find the active Rekor signer key file name. Open the latest target file, for example,
1.targets.json, within the local TUF repository. In this file you will find the active Rekor signer key file name, for example, rekor.pub. Set an environment variable with this active Rekor signer key file name:
$ export ACTIVE_KEY_NAME=rekor.pub
Update the Rekor signer key with the old public key:
$ echo $OLD_PUBLIC_KEY | base64 -d > $ACTIVE_KEY_NAME
Expire the old Rekor signer key:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${ACTIVE_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --rekor-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Add the new Rekor signer key:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-rekor-target "${NEW_KEY_NAME}" \
  --rekor-uri "${REKOR_URL}" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Upload these changes to the TUF server:
$ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

Delete the working directory:
$ rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
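As a quick smoke test, you can sign and verify a container image with cosign against the rotated service; a minimal sketch, assuming the hypothetical placeholders CLIENT_IMAGE and SIGNER_EMAIL, and FULCIO_URL, REKOR_URL, and OIDC_ISSUER_URL environment variables set for your environment (none of these are defined earlier in this procedure):

$ cosign sign -y --fulcio-url=$FULCIO_URL --rekor-url=$REKOR_URL --oidc-issuer=$OIDC_ISSUER_URL CLIENT_IMAGE
$ cosign verify --rekor-url=$REKOR_URL --certificate-identity=SIGNER_EMAIL --certificate-oidc-issuer=$OIDC_ISSUER_URL CLIENT_IMAGE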
1.3.2. Rotating the Certificate Transparency log signer key
You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.

Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:
$ gunzip tuftool-amd64.gz
$ chmod +x tuftool-amd64

Move and rename the binary to a location within your $PATH environment:
$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Make a backup of the current CT log configuration, and keys:
$ export SERVER_CONFIG_NAME=$(oc get ctlog -o jsonpath='{.items[0].status.serverConfigRef.name}')
$ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.config}" | base64 --decode > config.txtpb
$ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.fulcio-0}" | base64 --decode > fulcio-0.pem
$ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.private}" | base64 --decode > private.pem
$ oc get secret $SERVER_CONFIG_NAME -o jsonpath="{.data.public}" | base64 --decode > public.pem

Capture the current tree identifier:
$ export OLD_TREE_ID=$(oc get ctlog -o jsonpath='{.items[0].status.treeID}')

Set the log tree to the DRAINING state:
$ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

While draining, the tree log does not accept any new entries. Watch and wait for the queue to empty.
Important: You must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
Once the queue has been fully drained, freeze the log:
$ oc run --image registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- updatetree --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --tree_id=${OLD_TREE_ID} --tree_state=FROZEN --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt

Create a new Merkle tree, and capture the new tree identifier:
$ export NEW_TREE_ID=$(kubectl run createtree --image registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --restart=Never --attach=true --rm=true -q -- -logtostderr=false --admin_server=trillian-logserver.trusted-artifact-signer.svc:8091 --display_name=ctlog-tree --tls_cert_file=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt)
Generate a new certificate, along with new public and private keys:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem
$ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem
$ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"

Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
Update the CT log configuration.
- Open the config.txtpb file for editing.
- For the frozen log, add the not_after_limit field to the frozen log entry, rename the prefix value to a unique name, and replace the old path to the private key with ctfe-keys/private-0:

...
log_configs:{
  # frozen log
  config:{
    log_id:2066075212146181968
    prefix:"trusted-artifact-signer-0"
    roots_pem_file:"/ctfe-keys/fulcio-0"
    private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"/ctfe-keys/private-0" password:"Example123"}}
    public_key:{der:"0Y0\x13\x06\x07*\x86H\xce=\x02\x01\x06\x08*\x86H\xce=\x03\x01\x07\x03B\x00\x04)'.\xffUJ\xe2s)\xefR\x8a\xfcO\xdcewȶy\xa7\x9d<\x13\xb0\x1c\x99\x96\xe4'\xe3v\x07:\xc8I+\x08J\x9d\x8a\xed\x06\xe4\xaeI:q\x98\xf4\xbc<o4VD\x0cr\xf9\x9c\xecxT\x84"}
    not_after_limit:{seconds:1728056285 nanos:012111000}
    ext_key_usages:"CodeSigning"
    log_backend_name:"trillian"
  }

Note: You can get the current time values for seconds and nanoseconds by running the date +%s and date +%N commands.

Important: The not_after_limit field defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.

- Copy and paste the frozen log config block, appending it to the configuration file to create a new entry.
- Change the following lines in the new config block. Set the log_id to the new tree identifier, change the prefix to trusted-artifact-signer, change the private_key path to ctfe-keys/private, remove the public_key line, and change not_after_limit to not_after_start and set the timestamp range:

...
log_configs:{
  # frozen log
  ...
  # new active log
  config:{
    log_id: NEW_TREE_ID
    prefix:"trusted-artifact-signer"
    roots_pem_file:"/ctfe-keys/fulcio-0"
    private_key:{[type.googleapis.com/keyspb.PEMKeyFile]:{path:"ctfe-keys/private" password:"CHANGE_ME"}}
    ext_key_usages:"CodeSigning"
    not_after_start:{seconds:1713201754 nanos:155663000}
    log_backend_name:"trillian"
  }

Add the NEW_TREE_ID, and replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.

Important: The not_after_start field defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time.
Create a new secret resource:
$ oc create secret generic ctlog-config \
  --from-file=config=config.txtpb \
  --from-file=private=new-ctlog.pass.pem \
  --from-file=public=new-ctlog-public.pem \
  --from-file=fulcio-0=fulcio-0.pem \
  --from-file=private-0=private.pem \
  --from-file=public-0=public.pem \
  --from-literal=password=CHANGE_ME

Replace CHANGE_ME with the new private key password.
Configure The Update Framework (TUF) service to use the new CT log public key.
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example"
$ export ROOT="${WORK}/root/root.json"
$ export KEYDIR="${WORK}/keys"
$ export INPUT="${WORK}/input"
$ export TUF_REPO="${WORK}/tuf-repo"
$ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}')"

Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

Download the TUF contents to the temporary TUF directory structure:
$ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
$ cp "${TUF_REPO}/root.json" "${ROOT}"

Find the active CT log public key file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this target file you will find the active CT log public key file name, for example, ctfe.pub. Set an environment variable with this active CT log public key file name:
$ export ACTIVE_CTFE_NAME=ctfe.pub
Extract the active CT log public key from OpenShift:
$ oc get secret $(oc get ctlog securesign-sample -o jsonpath='{.status.publicKeyRef.name}') -o jsonpath='{.data.public}' | base64 -d > $ACTIVE_CTFE_NAME

Expire the old CT log signer key:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "$ACTIVE_CTFE_NAME" \
  --ctlog-uri "https://ctlog.rhtas" \
  --ctlog-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Add the new CT log signer key:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-ctlog-target "new-ctlog-public.pem" \
  --ctlog-uri "https://ctlog.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Upload these changes to the TUF server:
$ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"
Update the Securesign CT log configuration with the new tree identifier:
$ read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/ctlog/serverConfigRef",
    "value": {"name": "ctlog-config"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/treeID",
    "value": $NEW_TREE_ID
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/privateKeyRef",
    "value": {"name": "ctlog-config", "key": "private"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/privateKeyPasswordRef",
    "value": {"name": "ctlog-config", "key": "password"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/publicKeyRef",
    "value": {"name": "ctlog-config", "key": "public"}
  }
]
EOF

Patch the Securesign instance:
$ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the CT log server to redeploy:
$ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
Delete the working directory:
$ rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new CT log signer key.
1.3.3. Rotating the Fulcio certificate
You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc, openssl, and cosign binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.

Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:
$ gunzip tuftool-amd64.gz
$ chmod +x tuftool-amd64

Move and rename the binary to a location within your $PATH environment:
$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Generate a new certificate, along with new public and private keys:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem
$ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem
$ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME"
$ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pem

Replace CHANGE_ME with a new password.
Important: The certificate and new keys must have unique file names.
Create a new secret:
$ oc create secret generic fulcio-config \
  --from-file=private=new-fulcio.pass.pem \
  --from-file=cert=new-fulcio.cert.pem \
  --from-literal=password=CHANGE_ME

Replace CHANGE_ME with a new password.

Note: The password here must match the password used for generating the new private and public keys.
Configure The Update Framework (TUF) service to use the new Fulcio certificate.
Set up your shell environment:
$ export WORK="${HOME}/trustroot-example"
$ export ROOT="${WORK}/root/root.json"
$ export KEYDIR="${WORK}/keys"
$ export INPUT="${WORK}/input"
$ export TUF_REPO="${WORK}/tuf-repo"
$ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

Download the TUF contents to the temporary TUF directory structure:
$ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
$ cp "${TUF_REPO}/root.json" "${ROOT}"

Find the active Fulcio certificate file name. Open the latest target file, for example, 1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example, fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:
$ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
Extract the active Fulcio certificate from OpenShift:
$ oc get secret $(oc get fulcio securesign-sample -o jsonpath='{.status.certificate.caRef.name}') -o jsonpath='{.data.cert}' | base64 -d > $ACTIVE_CERT_NAME

Expire the old certificate:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-fulcio-target "$ACTIVE_CERT_NAME" \
  --fulcio-uri "https://fulcio.rhtas" \
  --fulcio-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Add the new Fulcio certificate:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-fulcio-target "new-fulcio.cert.pem" \
  --fulcio-uri "https://fulcio.rhtas" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Upload these changes to the TUF server:
$ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

Delete the working directory:
$ rm -r $WORK
Update the Securesign Fulcio configuration:
$ read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/privateKeyRef",
    "value": {"name": "fulcio-config", "key": "private"}
  },
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/privateKeyPasswordRef",
    "value": {"name": "fulcio-config", "key": "password"}
  },
  {
    "op": "replace",
    "path": "/spec/fulcio/certificate/caRef",
    "value": {"name": "fulcio-config", "key": "cert"}
  },
  {
    "op": "replace",
    "path": "/spec/ctlog/rootCertificates",
    "value": [{"name": "fulcio-config", "key": "cert"}]
  }
]
EOF

Patch the Securesign instance:
$ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the Fulcio server to redeploy:
$ oc wait pod -l app.kubernetes.io/name=fulcio-server --for=condition=Ready
$ oc wait pod -l app.kubernetes.io/name=ctlog --for=condition=Ready
Update the cosign configuration with the updated TUF configuration:
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
1.3.4. Rotating the Timestamp Authority signer key and certificate chain
You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- A workstation with the oc and openssl binaries installed.
Procedure
Download the tuftool binary from the OpenShift cluster to your workstation.
Important: Currently, the tuftool binary is only available for Linux operating systems on the x86_64 architecture.
- From the home page, click the ? icon, click Command line tools, go to the tuftool download section, and click the link for your platform.

Open a terminal on your workstation, decompress the binary .gz file, and set the execution bit:
$ gunzip tuftool-amd64.gz
$ chmod +x tuftool-amd64

Move and rename the binary to a location within your $PATH environment:
$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Generate a new certificate chain, and a new signer key.
Important: The new certificate and keys must have unique file names.
Create a temporary working directory:
$ mkdir certs && cd certs
Create the root certificate authority (CA) private key, and set a password:
$ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \
  -keyout rootCA.key.pem -out rootCA.crt.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \
  -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"

Replace CHANGE_ME with a new password.
Create the intermediate CA private key and certificate signing request (CSR), and set a password:
$ openssl req -newkey rsa:2048 -sha256 \
  -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"

Replace CHANGE_ME with a new password.
Sign the intermediate CA certificate with the root CA:
$ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \
  -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"

Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.
Create the leaf CA private key and CSR, and set a password:
$ openssl req -newkey rsa:2048 -sha256 \
  -keyout leafCA.key.pem -out leafCA.csr.pem \
  -passout pass:"CHANGE_ME" \
  -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"

Sign the leaf CA certificate with the intermediate CA:
$ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \
  -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \
  -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \
  -passin pass:"CHANGE_ME"

Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.
Create the certificate chain by combining the newly created certificates together:
$ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
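Optionally, you can confirm that the chain is valid before uploading it; a quick check with openssl, which reports OK when the leaf certificate validates against the intermediate and root certificates:

$ openssl verify -CAfile rootCA.crt.pem -untrusted intermediateCA.crt.pem leafCA.crt.pem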
Create a new secret resource with the signer key:
$ oc create secret generic rotated-signer-key --from-file=rotated-signer-key=certs/leafCA.key.pem
Create a new secret resource with the new certificate chain:
$ oc create secret generic rotated-cert-chain --from-file=rotated-cert-chain=certs/new-tsa.certchain.pem
Create a new secret resource for the password:
$ oc create secret generic rotated-password --from-literal=rotated-password=CHANGE_ME

Replace CHANGE_ME with the intermediate CA private key password.
Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:
$ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem
$ export TSA_URL=$(oc get timestampauthority securesign-sample -o jsonpath='{.status.url}')/api/v1/timestamp
$ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAME

Update the Securesign TSA configuration:
$ read -r -d '' SECURESIGN_PATCH <<EOF
[
  {
    "op": "replace",
    "path": "/spec/tsa/signer/certificateChain",
    "value": {
      "certificateChainRef": {"name": "rotated-cert-chain", "key": "rotated-cert-chain"}
    }
  },
  {
    "op": "replace",
    "path": "/spec/tsa/signer/file",
    "value": {
      "privateKeyRef": {"name": "rotated-signer-key", "key": "rotated-signer-key"},
      "passwordRef": {"name": "rotated-password", "key": "rotated-password"}
    }
  }
]
EOF

Patch the Securesign instance:
$ oc patch Securesign securesign-sample --type='json' -p="$SECURESIGN_PATCH"
Wait for the TSA server to redeploy with the new signer key and certificate chain:
$ oc get pods -w -l app.kubernetes.io/name=tsa-server
Get the new certificate chain:
$ export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pem
$ curl $TSA_URL/certchain -o $NEW_CERT_CHAIN_NAME
Configure The Update Framework (TUF) service to use the new TSA certificate chain.
Set up your shell environment:
$ export WORK="${HOME}/trustroot-example"
$ export ROOT="${WORK}/root/root.json"
$ export KEYDIR="${WORK}/keys"
$ export INPUT="${WORK}/input"
$ export TUF_REPO="${WORK}/tuf-repo"
$ export TUF_SERVER_POD="$(oc get pod --selector=app.kubernetes.io/component=tuf --no-headers -o custom-columns=":metadata.name")"

Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"

Download the TUF contents to the temporary TUF directory structure:
$ oc extract --to "${KEYDIR}/" secret/tuf-root-keys
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}"
$ cp "${TUF_REPO}/root.json" "${ROOT}"

Expire the old TSA certificate:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --tsa-status "Expired" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Add the new TSA certificate:
$ tuftool rhtas \
  --root "${ROOT}" \
  --key "${KEYDIR}/snapshot.pem" \
  --key "${KEYDIR}/targets.pem" \
  --key "${KEYDIR}/timestamp.pem" \
  --set-tsa-target "$NEW_CERT_CHAIN_NAME" \
  --tsa-uri "$TSA_URL" \
  --outdir "${TUF_REPO}" \
  --metadata-url "file://${TUF_REPO}"

Upload these changes to the TUF server:
$ oc rsync "${TUF_REPO}/" "${TUF_SERVER_POD}:/var/www/html"

Delete the working directory:
$ rm -r $WORK
Update the cosign configuration with the updated TUF configuration:
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new TSA signer key and certificate.
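To exercise the new chain, you can request a signed timestamp during signing by pointing cosign at the TSA endpoint; a minimal sketch, assuming a hypothetical CLIENT_IMAGE placeholder and the TSA_URL variable set earlier in this procedure:

$ cosign sign -y --timestamp-server-url="$TSA_URL" CLIENT_IMAGE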
1.4. The Policy Controller
As a systems administrator, it is important to control how and when objects get created within your OpenShift Container Platform environment and within your software supply chain. Starting with Red Hat Trusted Artifact Signer (RHTAS) 1.3, you can run the Policy Controller admission controller to enforce policies by using verifiable supply-chain metadata. Once you install the Policy Controller Operator and create the required resources, you can start enforcing your security policies across your software supply chain.
The Policy Controller is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
1.4.1. Trusted Artifact Signer’s implementation of the Sigstore Policy Controller
The Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator is a Red Hat OpenShift Container Platform admission controller designed to enforce policies by using supply-chain metadata. Essentially, the RHTAS Policy Controller acts as a gatekeeper for your Red Hat OpenShift cluster by making deployed workloads adhere to your security policies.
The RHTAS Policy Controller has these key features:
- Easy integration with the RHTAS service
- The Policy Controller Operator uses the established, trusted, and transparent services provided by RHTAS, such as Rekor’s transparency log, and Fulcio’s short-lived certificates for stronger signature validation. You can also take advantage of Trusted Artifact Signer’s secure Trust Root as a source of public keys and certificates used in artifact verification, along with auditing Rekor’s transparency log.
- Verification of container image signatures
- The RHTAS Policy Controller resolves container image tags to validate that the container image being run does not differ from what was signed by the RHTAS service. You can automatically verify signatures and attestations for container images, enforce them on a per-namespace basis, and create multiple policies to fit your security needs. You can create custom resources, such as ClusterImagePolicy, to define the rules for validating container images.
- Defining and enforcing workload policies
- You can define and enforce policies to restrict what container images can run in your Red Hat OpenShift cluster. One such requirement could be to only allow specified images to run that match a certain signing key, and to verify attestations. You can choose to enforce strict policies, or use warning mode to better understand how a policy will impact your environment. You can also define and enforce policies based on other supply chain metadata.
1.4.2. Installing the Policy Controller Operator
Before you can start creating policies, and enforcing them, you need to install the Policy Controller Operator by using the Operator Lifecycle Manager (OLM).
Prerequisites
- Access to the OpenShift web console with the cluster-admin role.
Procedure
- Log in to the OpenShift web console with a user that has the cluster-admin role.
- From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
- In the search field, type policy-controller, and click the Policy Controller Operator tile provided by Red Hat.
- Click the Install button to show the operator details.
- Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
- Once the installation finishes, you can create the Policy Controller resources.
1.4.3. Creating the Policy Controller resources
After installing the Red Hat Trusted Artifact Signer (RHTAS) Policy Controller Operator, you need to create three new resources: the base Policy Controller resource, the cluster image policy resource, and the Trust Root resource. This procedure guides you through creating a basic set of these resources.
By default, the Policy Controller resyncs the cluster image policies every 10 hours.
Prerequisites
- Installation of the RHTAS Policy Controller Operator.
- A workstation with the oc, curl, and tuftool binaries installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
Note: You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, if prompted, and click Display Token to view the command.
Create and switch to the policy-controller-operator namespace:
$ oc new-project policy-controller-operator ; oc project policy-controller-operator
Create a basic Policy Controller resource.
Configure the Policy Controller to watch your namespaces that match the defined label selector under spec.policy-controller.webhook.namespaceSelector.matchExpressions:

...
spec:
  policy-controller:
    ...
    webhook:
      ...
      namespaceSelector:
        matchExpressions:
          - key: policy.rhtas.com/include
            operator: In
            values: ["true"]
...

$ cat <<EOF | oc apply -f -
apiVersion: rhtas.charts.redhat.com/v1alpha1
kind: PolicyController
metadata:
  name: policycontroller-sample
spec:
  policy-controller:
    cosign:
      webhookName: "policy.rhtas.com"
    webhook:
      name: webhook
      extraArgs:
        webhook-name: policy.rhtas.com
        mutating-webhook-name: defaulting.clusterimagepolicy.rhtas.com
        validating-webhook-name: validating.clusterimagepolicy.rhtas.com
      failurePolicy: Fail
      namespaceSelector:
        matchExpressions:
          - key: policy.rhtas.com/include
            operator: In
            values: ["true"]
      webhookNames:
        defaulting: "defaulting.clusterimagepolicy.rhtas.com"
        validating: "validating.clusterimagepolicy.rhtas.com"
EOF

Important: You must create this resource in the policy-controller-operator namespace.

Add the policy.rhtas.com/include: "true" label to the namespace that you want watched by the Policy Controller:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    policy.rhtas.com/include: "true"
  name: example-namespace

If you have a custom Certificate Authority (CA) bundle or self-signed certificates, then you can add your ConfigMap name and key under the spec.policy-controller.webhook.registryCaBundle section of the Policy Controller resource:

...
spec:
  policy-controller:
    ...
    webhook:
      registryCaBundle:
        name: CONFIGMAP_NAME
        key: CA_BUNDLE_KEY
...
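If you take that approach, the referenced ConfigMap must already exist in the policy-controller-operator namespace; a sketch of creating it from a local PEM bundle, using the same CONFIGMAP_NAME and CA_BUNDLE_KEY placeholders as above and a hypothetical ca-bundle.crt file:

$ oc create configmap CONFIGMAP_NAME -n policy-controller-operator --from-file=CA_BUNDLE_KEY=ca-bundle.crt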
Create a Trust Root resource. You have three options for creating the Trust Root resource: a custom TUF repository, using your own keys, or using a serialized TUF root.
Configure these environment variables from the RHTAS services:
$ export TUF_URL="$(oc -n trusted-artifact-signer get tuf -o jsonpath='{.items[0].status.url}')"
$ export BASE64_TUF_ROOT="$(curl -fsSL "$TUF_URL/root.json" | base64 -w0)"
$ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')"
$ export CTLOG_URL="http://ctlog.trusted-artifact-signer.svc.cluster.local"
$ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')"
$ export TSA_URL="$(oc -n trusted-artifact-signer get timestampAuthorities -o jsonpath='{.items[0].status.url}')"

Option 1. Create the TrustRoot resource for a custom TUF repository:
$ cat <<EOF | oc apply -f -
apiVersion: policy.sigstore.dev/v1alpha1
kind: TrustRoot
metadata:
  name: trust-root
spec:
  remote:
    mirror: $TUF_URL
    root: |
      $BASE64_TUF_ROOT
EOF

Option 2. Create a Trust Root with your own keys.
Create and apply the TrustRoot resource using this template:

apiVersion: policy.sigstore.dev/v1alpha1
kind: TrustRoot
metadata:
  name: trust-root
spec:
  sigstoreKeys:
    certificateAuthorities:
      - subject:
          organization: fulcio-organization
          commonName: fulcio-common-name
        uri: $FULCIO_URL
        certChain: |-
          FULCIO_CERT_CHAIN
    ctLogs:
      - baseURL: $CTLOG_URL
        hashAlgorithm: sha-256
        publicKey: |-
          CTFE_PUBLIC_KEY
    tLogs:
      - baseURL: $REKOR_URL
        hashAlgorithm: sha-256
        publicKey: |-
          REKOR_PUBLIC_KEY
    timestampAuthorities:
      - subject:
          organization: tsa-organization
          commonName: tsa-common-name
        uri: $TSA_URL
        certChain: |-
          TSA_CERT_CHAIN

Note: Substitute the public keys and certificate chain values with your specific values for your RHTAS environment.
Option 3. Create a Trust Root for a serialized TUF root.
Create a temporary directory to contain a clone of your TUF root:
$ mkdir -p tuf-repo
Download and clone the TUF repository:
$ curl -s $TUF_URL/root.json > root.json $ tuftool clone --metadata-url=$TUF_URL --metadata-dir=tuf-repo --targets-url=$TUF_URL/targets --targets-dir=tuf-repo/targets --root=root.json
Archive and encode the TUF repository:
$ tar -C ./tuf-repo -czf tuf-repo.tgz . $ export MIRROR_FS=$(base64 -w0 tuf-repo.tgz)
Create the
TrustRootresource:$ cat <<EOF | oc apply -f - apiVersion: policy.sigstore.dev/v1alpha1 kind: TrustRoot metadata: name: trust-root spec: repository: root: |- $BASE64_TUF_ROOT mirrorFS: |- $MIRROR_FS EOF
Create a basic Policy Controller cluster image policy resource.
Configure these environment variables for Fulcio, Rekor, the Trust Root, and the OpenID Connect (OIDC) issuer and subject:
$ export FULCIO_URL="$(oc -n trusted-artifact-signer get fulcio -o jsonpath='{.items[0].status.url}')" $ export REKOR_URL="$(oc -n trusted-artifact-signer get rekor -o jsonpath='{.items[0].status.url}')" $ export TRUST_ROOT_RESOURCE="trust-root" $ export OIDC_ISSUER_URL="https://ISSUER_URL" $ export OIDC_SUBJECT="SUBJECT"Create the
ClusterImagePolicyresource:$ cat <<EOF | oc apply -f - apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: cluster-image-policy spec: images: - glob: "**" authorities: - keyless: url: $FULCIO_URL trustRootRef: $TRUST_ROOT_RESOURCE identities: - issuer: $OIDC_ISSUER_URL subject: $OIDC_SUBJECT ctlog: url: $REKOR_URL trustRootRef: $TRUST_ROOT_RESOURCE rfc3161timestamp: trustRootRef: $TRUST_ROOT_RESOURCE EOFNoteThe
globvalue of**evaluates all container images.
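As a quick check of the enforcement (a minimal sketch; the namespace and image references are illustrative), you can try to run a pod in a namespace labeled with policy.rhtas.com/include: "true". An image signed against your trust root should be admitted, while an unsigned image should be rejected by the validating webhook:
$ oc -n example-namespace run signed-test --image=IMAGE_SIGNED_WITH_RHTAS
$ oc -n example-namespace run unsigned-test --image=UNSIGNED_IMAGE
The second command should fail admission with an error that references the cluster image policy.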
1.5. Signing and verifying AI/ML models
As a systems administrator, you can use Red Hat Trusted Artifact Signer (RHTAS) to sign and verify artificial intelligence (AI) and machine learning (ML) models. You can integrate AI/ML model signing and verification into your Continuous Integration and Continuous Deployment (CI/CD) pipelines, or use the command-line interface (CLI). Doing this can enhance the security of your software supply chain workloads running on Red Hat OpenShift by ensuring that only valid AI/ML models are used.
Signing and verifying AI/ML models by using the Model Validation Operator or the CLI are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
1.5.1. Building a client trust configuration for Model Validation
Before signing artificial intelligence (AI) and machine learning (ML) models with Red Hat Trusted Artifact Signer (RHTAS), you need to generate a client trust configuration that uses The Update Framework (TUF) Trust Root for your RHTAS environment.
On RHTAS 1.2 and below, the Rekor key is configured to use SHA384 encryption. You must rotate the Rekor signer key to use SHA256. If you do not change the encryption type for Rekor, then verifying artifacts will cause mismatch errors.
For more information about this issue, see the RHTAS Release Notes.
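If you are unsure which key type your Rekor instance currently uses, one way to check (a minimal sketch; the Rekor URL is illustrative, substitute your Rekor route) is to inspect the published public key:
$ curl -s https://rekor.example.com/api/v1/log/publicKey | openssl pkey -pubin -text -noout | grep -E 'Public-Key|CURVE'
In a typical configuration, a P-384 key corresponds to SHA384 signing, and a P-256 key corresponds to SHA256.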
Prerequisites
-
Access to the OpenShift web console with the
cluster-adminrole. - Installation of RHTAS running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
-
A workstation with the
ocbinary installed.
Procedure
Open a terminal on your workstation, and log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
NoteYou can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export SIGNED_TRUST_ROOT="${WORK}/root/trusted_root.json" $ export TUF_REPO="${WORK}/tuf-repo" $ export TUF_SERVER_POD="$(oc get pods -l app.kubernetes.io/component=tuf,\!job-name -o jsonpath='{.items[0].metadata.name}' -n trusted-artifact-signer)" $ export CA_URL=$(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) $ export TLOG_URL=$(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) $ export OIDC_URL="OIDC_ISSUER_URL"Replace OIDC_ISSUER_URL with your OIDC provider’s URL address.
Create the temporary TUF directories:
$ mkdir -p "${WORK}/root/" "${TUF_REPO}"Download the signed target trust root file to the temporary TUF directories:
$ oc cp "${TUF_SERVER_POD}:/var/www/html" "${TUF_REPO}" -n trusted-artifact-signer $ cp "${TUF_REPO}/targets/DIGEST.trusted_root.json" "${SIGNED_TRUST_ROOT}"An example signed target trust root file name looks similar to this format,
c03afd04e353889093e5b16b019656b23a57.trusted_root.json, where your DIGEST value would be different.Create a script for making the client trust configuration used by the CLI:
$ cat > make_trust_config.sh <<'EOF' #!/bin/bash # Usage: ./make-trust-config.sh <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl] if [ "$#" -lt 2 ]; then echo "Usage: $0 <trusted_root_input.json> <output.json> [caUrl] [oidcUrl] [tlogUrl]" exit 1 fi INPUT_FILE="$1" OUTPUT_FILE="$2" CA_URL=${3:-${CA_URL:-""}} OIDC_URL=${4:-${OIDC_URL:-""}} TLOG_URL=${5:-${TLOG_URL:-""}} # Check for jq if ! command -v jq &> /dev/null; then echo "Error: 'jq' is required but not installed." exit 1 fi jq -n \ --argjson trustedRoot "$(cat $INPUT_FILE)" \ --arg caUrl "$CA_URL" \ --arg oidcUrl "$OIDC_URL" \ --arg tlogUrl "$TLOG_URL" \ '{ mediaType: "application/vnd.dev.sigstore.clienttrustconfig.v0.1+json", trustedRoot: $trustedRoot, signingConfig: { caUrl: $caUrl, oidcUrl: $oidcUrl, tlogUrls: [$tlogUrl] } }' > "$OUTPUT_FILE" EOFMake the script executable:
$ chmod u+x make_trust_config.sh
Run the
make_trust_config.shscript:$ ./make_trust_config.sh $SIGNED_TRUST_ROOT trust_config.json
A new
trust_config.jsonfile is created in the current working directory.- You can now start signing and verifying AI/ML models by using the command-line interface.
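As a quick sanity check (a minimal sketch; assumes the jq binary is installed on your workstation), confirm that the generated client trust configuration contains the expected media type and service URLs:
$ jq -r '.mediaType, .signingConfig.caUrl, .signingConfig.tlogUrls[0]' trust_config.json
The output should show application/vnd.dev.sigstore.clienttrustconfig.v0.1+json followed by your Fulcio and Rekor URLs.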
Additional resources
1.5.2. Signing and verifying AI/ML models by using the command-line interface
With Red Hat Trusted Artifact Signer (RHTAS), you can sign and verify signatures on artificial intelligence (AI) and machine learning (ML) models by using the model-transparency command-line interface (CLI). For the CLI to sign and verify the AI and ML models, it must know about your Trust Root. The signing and verifying commands run inside a container image, and do not require a locally installed binary.
Signing and verifying AI/ML models by using the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
On RHTAS 1.2 and below, the Rekor key is configured to use SHA384 encryption. You must rotate the Rekor signer key to use SHA256. If you do not change the encryption type for Rekor, then verifying artifacts will cause mismatch errors.
For more information about this issue, see the RHTAS Release Notes.
Prerequisites
- Installation of RHTAS running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- An OpenID Connect (OIDC) identity for retrieving tokens or client credentials.
-
A workstation with the
podmanbinary installed.
Procedure
Configure your shell environment:
$ export OIDC_ISSUER="OIDC_ISSUER_URL" $ export MODEL_IMAGE="registry.redhat.io/rhtas/model-transparency-rhel9@sha256:6db7fa2b956875a6f507811166b47b164d463dea78ab4403c6d7648d838b8acb" $ export MODEL_DIR="PATH_TO_MODEL_DIRECTORY" $ export TRUST_CFG="$(pwd)/trust_config.json" $ export SIG_PATH="$MODEL_DIR/model.sig"
Replace OIDC_ISSUER_URL with your OIDC provider URL address.
Replace PATH_TO_MODEL_DIRECTORY with the absolute path to the directory containing the AI/ML models.
There are two options for signing a model by using the CLI. You can use an identity token and a client identifier, or just the client identifier. Using an identity token is the non-interactive way, whereas using only a client identifier is the interactive way.
NoteWhen using self-signed certificates or a custom Certificate Authority (CA), you have to pass those certificates to the container to successfully sign an AI/ML model.
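One possible approach is to mount your CA bundle into the container and point the tooling at it with an environment variable. This is only a sketch: it assumes the underlying tooling honors a standard variable such as REQUESTS_CA_BUNDLE, and the bundle path is illustrative. You would add flags like these to the podman run commands that follow:
-v "$PWD/ca-bundle.crt":/etc/pki/tls/certs/ca-bundle.crt:Z,ro \
-e REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt \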
Option 1. Signing AI/ML models with an identity token:
$ podman run --rm \ --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \ -v "$MODEL_DIR":/model:Z,U \ -v "$TRUST_CFG":/trust_config.json:Z,ro \ -w /model "$MODEL_IMAGE" \ sign sigstore \ --trust_config /trust_config.json \ --signature /model/model.sig \ --identity_token "OIDC_TOKEN" \ --client_id CLIENT_ID \ /model
Replace OIDC_TOKEN with your OIDC authentication token.
Replace CLIENT_ID with your OIDC client identifier.
Option 2. Signing AI/ML models by using a client identifier:
$ podman run --rm -it \ --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \ -v "$MODEL_DIR":/model:Z,U \ -v "$TRUST_CFG":/trust_config.json:Z,ro \ -w /model "$MODEL_IMAGE" \ sign sigstore \ --trust_config "/trust_config.json" \ --signature "/model/model.sig" \ --client_id CLIENT_ID \ /modelReplace CLIENT_ID with your OIDC client identifier.
Verify a model signature by using the CLI:
$ podman run --rm -it \ --userns=keep-id --user "$(id -u)":"$(id -g)" --group-add keep-groups \ -v "$MODEL_DIR":/model:Z,U \ -v "$TRUST_CFG":/trust_config.json:Z,ro \ -w /model "$MODEL_IMAGE" \ verify sigstore \ --trust_config "/trust_config.json" \ --signature "/model/model.sig" \ --identity IDENTITY \ --identity_provider "$OIDC_ISSUER" \ /modelReplace IDENTITY with an email address or with a SPIFFE or URI subject.
1.5.3. Installing and configuring the Model Validation Operator
The Model Validation Operator gives you the ability to verify signed artificial intelligence (AI) and machine learning (ML) models at runtime for Red Hat OpenShift environments. This Operator allows you to create a ModelValidation custom resource (CR) in a project namespace, and then you can add a label to your pod for validation. The Operator uses a mutating admission webhook that injects a short-lived step to validate an AI/ML model and its signature by using Red Hat Trusted Artifact Signer (RHTAS) and The Update Framework (TUF) to verify the signature, identity, and issuer. If the validation process succeeds, then the pod proceeds, but if validation fails, then the pod admission is denied.
The Model Validation Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
Prerequisites
- Red Hat OpenShift Container Platform 4.16 or later.
-
Access to the OpenShift web console with the
cluster-adminrole. - Installation of RHTAS running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
-
A workstation with the
ocbinary installed.
Procedure
-
Log in to the OpenShift web console with a user that has the
cluster-adminrole. - From the Administrator perspective, expand the Operators navigation menu, and click OperatorHub.
- In the search field, type Model Validation Operator, and click the tile that is displayed.
- Click the Install button to show the operator details.
- Accept the default values, click Install on the Install Operator page, and wait for the installation to finish.
- Once the installation finishes, click View Operator.
- Add the AI/ML model, its signature, and your signed Trust Root configuration to the namespace where you want validation done. This is typically done by creating a Persistent Volume Claim (PVC) on the OpenShift cluster, and copying these files to the PVC, as in the sketch that follows.
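The following is a minimal sketch of such a PVC; the claim name, namespace, and storage size are illustrative. After the claim is bound, copy the model, signature, and trust configuration into the volume through any pod that mounts it, for example with oc cp:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-storage-example
  namespace: NAMESPACE
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi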
Create a
ModelValidationCR in the namespace where you want validation done.- Click the modelvalidation tab, and click the Create modelvalidation button.
On the Create modelvalidation page, select YAML view. Update the YAML file accordingly:
apiVersion: ml.sigstore.dev/v1alpha1 kind: ModelValidation metadata: name: model-validation-example namespace: NAMESPACE spec: config: sigstoreConfig: certificateIdentity: "IDENTITY" certificateOidcIssuer: "OIDC_ISSUER" clientTrustConfig: trustConfigPath: SIGNED_TRUST_ROOT model: path: PATH_TO_MODEL signaturePath: PATH_TO_MODEL_SIGNATURE
Replace NAMESPACE with the same namespace where your workloads run.
Replace IDENTITY with the signer’s email address.
Replace OIDC_ISSUER with your OIDC provider’s issuer URL address.
Replace SIGNED_TRUST_ROOT with the signed Trust Root target file, for example,
/data/trust-config.json.Replace PATH_TO_MODEL with the path to the model file, for example,
/data/model.onnx.Replace PATH_TO_MODEL_SIGNATURE with the path to the model’s signature file, for example,
/data/model.sig.- Click the Create button.
From your terminal session, create a new pod CR in the namespace where you want to trigger a validation check. Update this example YAML file with your specific information:
apiVersion: v1 kind: Pod metadata: name: model-validation-pod-example namespace: NAMESPACE labels: validation.ml.sigstore.dev/ml: "model-validation-example" spec: containers: - name: app image: nginx ports: - containerPort: 80 volumeMounts: - name: model-storage-example mountPath: PATH_TO_WORKLOAD_VOLUME volumes: - name: model-storage-example persistentVolumeClaim: claimName: PVC_NAME
You configure the webhook by using this label key,
validation.ml.sigstore.dev/ml, with the value of theModelValidationCR name created earlier, surrounded by double quotes.Replace NAMESPACE, PATH_TO_WORKLOAD_VOLUME, and PVC_NAME with values appropriate to your environment.
Create the new pod by applying the CR file:
oc apply -f PATH_TO_CR_FILE
- Now the webhook can intercept pod create and update requests. Next, the Model Validation Operator injects validation steps that read the AI/ML model and its signature, and check them against your Trust Root for RHTAS. If the validation check succeeds, then the pod creation or modification proceeds.
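A minimal way to confirm the outcome (the namespace and pod name follow the earlier examples) is to check whether the pod was admitted, and review recent events for a denial message from the webhook:
$ oc -n NAMESPACE get pod model-validation-pod-example
$ oc -n NAMESPACE get events --sort-by=.lastTimestamp | tail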
1.6. Using your own certificate authority bundle
You can bring your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.
Prerequisites
- Installation of the RHTAS operator running on Red Hat OpenShift Container Platform.
- A running Securesign instance.
- Your CA root certificate.
-
A workstation with the
ocbinary installed.
Procedure
Log in to OpenShift from the command line:
oc login --token=TOKEN --server=SERVER_URL_AND_PORT
$ oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443
NoteYou can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command. Offer your user name and password again, if asked, and click Display Token to view the command.
Switch to the RHTAS project:
$ oc project trusted-artifact-signer
Create a new ConfigMap by using your organization’s CA root certificate bundle:
$ oc create configmap custom-ca-bundle --from-file=ca-bundle.crt
ImportantThe certificate filename must be
ca-bundle.crt.Open the Securesign resource for editing:
$ oc edit Securesign securesign-sample
Add the
rhtas.redhat.com/trusted-caunder themetadata.annotationssection:apiVersion: rhtas.redhat.com/v1alpha1 kind: Securesign metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec: ...
- Save, and quit the editor.
Open the Fulcio resource for editing:
$ oc edit Fulcio securesign-sample
Add the
rhtas.redhat.com/trusted-caunder themetadata.annotationssection:apiVersion: rhtas.redhat.com/v1alpha1 kind: Fulcio metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec: ...- Save, and quit the editor.
- Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.
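A simple way to watch the reconfiguration (assuming the default trusted-artifact-signer namespace) is to monitor the pods until they return to the Running state:
$ oc -n trusted-artifact-signer get pods -w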
Chapter 2. Red Hat Enterprise Linux
2.1. Protect your signing data
As a systems administrator, protecting the signing data of your software supply chain is critical when there is data loss due to hardware failure or accidental data deletion.
For Red Hat Trusted Artifact Signer (RHTAS) deployments on Red Hat Enterprise Linux, you can simply create encrypted backups of your signing data to a local file system.
2.1.1. Backing up your Trusted Artifact Signer data
You can schedule automatic backups of your Red Hat Trusted Artifact Signer (RHTAS) data to a mounted file system. Data backups are encrypted with SSL, and compressed.
The RHTAS service does not support concurrent manual backup and restore operations.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
- Open the RHTAS Ansible Playbook for editing.
Under the
tas_single_node_backup_restore.backupsection, set theenabledvariable totrue:tas_single_node_backup_restore: backup: enabled: trueBy default, a daily backup job runs at midnight every day. You can change this to better fit your schedule.
tas_single_node_backup_restore: backup: enabled: true schedule: "*-*-* 00:00:00"Set a
passphrase, and specify the local backup directory:tas_single_node_backup_restore: backup: enabled: true schedule: "*-*-* 00:00:00" force_run: false passphrase: "example123" directory: /root/tas_backups-
Optional. To start an immediate backup job, set the
force_runvariable totrue. - Save the changes, and quit the editor.
Run the RHTAS Ansible Playbook to apply the changes:
ansible-playbook -i inventory play.yml
After the backup finishes, the resulting encrypted, and compressed file name format is,
BACKUP-<date-and-time>-UTC.tar.gz.enc.
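To confirm that a backup was produced, you can list the backup directory on the managed node; the path matches the directory variable set earlier:
$ ls -lh /root/tas_backups/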
2.1.2. Restoring your Trusted Artifact Signer data
You can restore snapshots of your Red Hat Trusted Artifact Signer (RHTAS) data from a backup source.
Prerequisites
- Red Hat Enterprise Linux 9.4 or later.
- A deployment of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- A SSH connection to the managed node, with root-level privileges on the managed node.
- The backup source file is available.
- Know the passphrase used for the backup source.
Procedure
- Copy the backup data file to a directory on the Ansible control node.
- Open the RHTAS Ansible Playbook for editing.
Under the
tas_single_node_backup_restore.restoresection, set theenabledvariable totrue:tas_single_node_backup_restore: ... restore: enabled: trueSpecify the source location of the backup file, and give the correct passphrase:
tas_single_node_backup_restore: ... restore: enabled: true source: "PATH_TO_BACKUP_FILE" passphrase: "example123"-
Under the
tas_single_node_backup_restore.backupsection, verify that theforce_runvariable is set tofalse. If theforce_runvariable is set totrue, then set it tofalse. Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
The restoration process starts, and re-executes all tasks to validate the integrity of the RHTAS service.
2.2. The Update Framework
As a systems administrator, understanding Red Hat’s implementation of The Update Framework (TUF) for Red Hat Trusted Artifact Signer (RHTAS) is important in helping you maintain a secure coding environment for developers. You can refresh TUF’s root and non-root metadata periodically to help prevent mix-and-match attacks on a code base. Refreshing the TUF metadata gives clients the ability to detect and reject outdated or tampered-with files.
2.2.1. Trusted Artifact Signer’s implementation of The Update Framework
Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store public keys, and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, and this makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign, uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This simple HTTP server has the TUF repository of the individual targets.
By default, when deploying RHTAS on Red Hat OpenShift or Red Hat Enterprise Linux, we create a TUF repository, and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy the RHTAS service. Red Hat recommends choosing shorter expiration periods, and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
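To see when your current TUF metadata expires, you can query the published metadata directly. This is a minimal sketch; the TUF URL is illustrative, and for Red Hat Enterprise Linux deployments it is typically https://tuf.<base_hostname>:
$ curl -s https://tuf.example.com/timestamp.json | jq -r '.signed.expires'
$ curl -s https://tuf.example.com/1.root.json | jq -r '.signed.expires'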
2.2.2. Updating The Update Framework metadata files
By default, The Update Framework (TUF) metadata files expire after 52 weeks from the Red Hat Trusted Artifact Signer (RHTAS) deployment date. You must update the TUF metadata files before they expire, at least once every 52 weeks. Red Hat recommends updating the metadata files more often than once a year.
This procedure walks you through refreshing the root, and non-root metadata files.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux (RHEL) managed by Ansible.
-
A workstation with the
rsync, andpodmanbinaries installed. - A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the
tuftoolbinary from the local command-line interface (CLI) tool download page to your workstation.NoteThe URL address is the configured node as defined by the
tas_single_node_base_hostnamevariable. An example URL address would be https://cli-server.example.com, given thetas_single_node_base_hostnamevalue asexample.com.ImportantCurrently, the
tuftoolbinary is only available for Linux operating systems on the x86_64 architecture.- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary
.gzfile, and set the execution bit:$ gunzip tuftool-amd64.gz $ chmod +x tuftool-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export ROOT="${WORK}/root/root.json" $ export KEYDIR="${WORK}/keys" $ export INPUT="${WORK}/input" $ export TUF_REPO="${WORK}/tuf-repo" $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]') $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]') $ export TIMESTAMP_EXPIRATION="in 10 days" $ export SNAPSHOT_EXPIRATION="in 26 weeks" $ export TARGETS_EXPIRATION="in 26 weeks" $ export ROOT_EXPIRATION="in 26 weeks"Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with your relevant values.
Set the expiration durations according to your requirements.
Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"Download the TUF contents to the temporary TUF directory structure:
$ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}" $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}" $ cp "${TUF_REPO}/root.json" "${ROOT}"You can update the timestamp, snapshot, and targets metadata all in one command:
$ tuftool update \ --root "${ROOT}" \ --key "${KEYDIR}/timestamp.pem" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --timestamp-expires "${TIMESTAMP_EXPIRATION}" \ --snapshot-expires "${SNAPSHOT_EXPIRATION}" \ --targets-expires "${TARGETS_EXPIRATION}" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"NoteYou can also run the TUF metadata update on a subset of TUF metadata files. For example, the
timestamp.jsonmetadata file expires more often than the other metadata files. Therefore, you can just update the timestamp metadata file by running the following command:$ tuftool update \ --root "${ROOT}" \ --key "${KEYDIR}/timestamp.pem" \ --timestamp-expires "${TIMESTAMP_EXPIRATION}" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Only update the root expiration date if it is about to expire:
$ tuftool root expire "${ROOT}" "${ROOT_EXPIRATION}"NoteYou can skip this step if the root file is not close to expiring.
Update the root version:
$ tuftool root bump-version "${ROOT}"Sign the root metadata file again:
$ tuftool root sign "${ROOT}" -k "${KEYDIR}/root.pem"Set the new root version, and copy the root metadata file in place:
$ export NEW_ROOT_VERSION=$(cat "${ROOT}" | jq -r ".signed.version") $ cp "${ROOT}" "${TUF_REPO}/root.json" $ cp "${ROOT}" "${TUF_REPO}/${NEW_ROOT_VERSION}.root.json"Upload these changes to the TUF server.
Create a compressed archive of the TUF repository:
$ tar -C "${WORK}" -czvf repository.tar.gz tuf-repoUpdate the RHTAS Ansible Playbook with these two lines:
tas_single_node_trust_root: full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
Additional resources
2.3. Rotate your certificates and keys
As a systems administrator, you can proactively rotate the certificates and signer keys used by the Red Hat Trusted Artifact Signer (RHTAS) service running on Red Hat OpenShift. Rotating your keys regularly can prevent key tampering, and theft. These procedures guide you through expiring your old certificates and signer keys, and replacing them with a new certificate and signer key for the underlying services that make up RHTAS. You can rotate keys and certificates for the following services:
- Rekor
- Certificate Transparency log
- Fulcio
- Timestamp Authority
2.3.1. Rotating the Rekor signer key
You can proactively rotate Rekor’s signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old Rekor signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. When expiring your old Rekor signer key, you can still verify artifacts signed by the old key.
This procedure requires downtime to the Rekor service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
-
A workstation with the
rsync,openssl, andcosignbinaries installed. - A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the
rekor-clibinary from the local command-line interface (CLI) tool download page to your workstation.Open a web browser, and go to the CLI server web page.
NoteThe URL address is the configured node as defined by the
tas_single_node_base_hostnamevariable. An example URL address would be https://cli-server.example.com, given that the value oftas_single_node_base_hostnameisexample.com.- From the download page, go to the rekor-cli download section, and click the link for your platform.
From a terminal on your workstation, decompress the binary
.gzfile, and set the execute bit:$ gunzip rekor-cli-amd64.gz $ chmod +x rekor-cli-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli
Download the
tuftoolbinary from the local command-line interface (CLI) tool download page to your workstation.ImportantCurrently, the
tuftoolbinary is only available for Linux operating systems on the x86_64 architecture.- From the download page, go to the tuftool download section, and click the link for your platform.
From a terminal on your workstation, decompress the binary
.gzfile, and set the execute bit:$ gunzip tuftool-amd64.gz $ chmod +x tuftool-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Assign shell variables to the base hostname, and the Rekor URL:
$ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE $ export REKOR_URL=https://rekor.${BASE_HOSTNAME}Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the
tas_single_node_base_hostnamevariable.Get the log tree identifier for the active shard:
$ export OLD_TREE_ID=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .TreeID)
Configure your shell environment:
$ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]') $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')
Replace IP_OF_ANSIBLE_MANAGED_NODE and USER_TO_CONNECT_TO_MANAGED_NODE with values for your environment.
Set the log tree to the
DRAININGstate:$ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --admin_server=trillian-logserver-pod:8091 --tree_id=${OLD_TREE_ID} --tree_state=DRAINING"While draining, the tree log will not accept any new entries. Content from github.com is not included.Watch and wait for the queue to empty.
ImportantYou must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
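One pragmatic way to watch the queue drain (a sketch; the polling interval is arbitrary) is to poll the active tree size until it stops changing:
$ watch -n 30 "rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize"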
Freeze the log tree:
$ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"Get the length of the frozen log tree:
$ export OLD_SHARD_LENGTH=$(rekor-cli loginfo --rekor_server $REKOR_URL --format json | jq -r .ActiveTreeSize)
Get Rekor’s public key for the old shard:
$ export OLD_PUBLIC_KEY=$(curl -s $REKOR_URL/api/v1/log/publicKey | base64 | tr -d '\n')
Create a new log tree:
$ export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=rekor-tree | tr -d '[:punct:][:blank:][:cntrl:]'")Now you have two log trees, one frozen tree, and a new tree that will become the active shard.
Create a new private key and an associated public key:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-rekor.pem $ openssl ec -in new-rekor.pem -pubout -out new-rekor.pub $ export NEW_KEY_NAME=new-rekor.pub
ImportantThe new key must have a unique file name.
Get the active Rekor signing key, and save the key to a file:
$ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/rekor-signer0.key ./rekor-signer0.key echo "$OLD_PUBLIC_KEY" | base64 -d > rekor.pubUpdate the Rekor configuration in the RHTAS Ansible playbook:
tas_single_node_rekor: active_signer_id: "new-rekor-key" active_tree_id: NEW_TREE_ID private_keys: - id: "new-rekor-key" key: | {{ lookup('file', 'new-rekor.pem') }} - id: "private-0" key: | {{ lookup('file', 'rekor-signer0.key') }} public_keys: - id: "new-rekor-pubkey" key: | {{ lookup('file', 'new-rekor.pub') }} - id: "public-0" key: | {{ lookup('file', 'rekor.pub') }} sharding_config: - tree_id: OLD_TREE_ID tree_length: OLD_SHARD_LENGTH pem_pub_key: "public-0"
Configure The Update Framework (TUF) service to use the new Rekor public key.
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export ROOT="${WORK}/root/root.json" $ export KEYDIR="${WORK}/keys" $ export INPUT="${WORK}/input" $ export TUF_REPO="${WORK}/tuf-repo" $ export TUF_URL="https://tuf.${BASE_HOSTNAME}"Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"Download the TUF contents to the temporary TUF directory structure:
$ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}" $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}" $ cp "${TUF_REPO}/root.json" "${ROOT}"Assign an environment variable to the active Rekor signer key file name:
$ export ACTIVE_KEY_NAME=rekor.pub
Expire the old Rekor signer key:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-rekor-target "${ACTIVE_KEY_NAME}" \ --rekor-uri "${REKOR_URL}" \ --rekor-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Add the new Rekor signer key:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-rekor-target "${NEW_KEY_NAME}" \ --rekor-uri "${REKOR_URL}" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Create a compressed archive file of the updated TUF repository:
$ tar -C "${WORK}" -czvf repository.tar.gz tuf-repoUpdate the RHTAS Ansible playbook by adding the new compressed archive file name to the
tas_single_node_trust_rootvariable:tas_single_node_trust_root: full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"Delete the working directory:
$ rm -r $WORK
Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
Update the
cosignconfiguration with the updated TUF configuration:$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Rekor signer key.
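To exercise the new key end to end, you can sign and verify a test image with cosign; the image reference, identity, and issuer values are illustrative:
$ cosign sign -y ttl.sh/rhtas/example-image:1h
$ cosign verify --certificate-identity=SIGNER_EMAIL --certificate-oidc-issuer=OIDC_ISSUER_URL ttl.sh/rhtas/example-image:1h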
2.3.2. Rotating the Certificate Transparency log signer key
You can proactively rotate the Certificate Transparency (CT) log signer key by using the sharding feature to freeze the log tree, and create a new log tree with a new signer key. This procedure walks you through expiring your old CT log signer key, and replacing it with a new signer key for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old CT log signer key still allows you to verify artifacts signed by the old key.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
-
A workstation with the
rsync,openssl, andcosignbinaries installed. - A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the
tuftoolbinary from the local command-line interface (CLI) tool download page to your workstation.NoteThe URL address is the configured node as defined by the
tas_single_node_base_hostnamevariable. An example URL address would be https://cli-server.example.com, given thetas_single_node_base_hostnamevalue asexample.com.ImportantCurrently, the
tuftoolbinary is only available for Linux operating systems on the x86_64 architecture.- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary
.gzfile, and set the execution bit:$ gunzip tuftool-amd64.gz $ chmod +x tuftool-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Configure your shell environment:
$ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]') $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]') $ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
Replace BASE_HOSTNAME_OF_RHTAS_SERVICE with the value of the
tas_single_node_base_hostnamevariable.Download the CTlog configuration map, the CTlog keys, and the Fulcio root certificate to your workstation:
$ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/configs/ctlog-config.yaml ./ctlog-config.yaml $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.key ./ctfe.key $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/ctlog0.pub ./ctfe.pub $ rsync --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem ./fulcio-0.pemCapture the current tree identifier:
$ export OLD_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo cat /etc/rhtas/configs/ctlog-treeid-config.yaml | grep 'tree_id:' | awk '{print \$2}'" | tr -d '[:punct:][:blank:][:cntrl:]')Set the log tree to the
DRAININGstate:$ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=DRAINING"While draining, the tree log will not accept any new entries. Watch and wait for the queue to empty.
ImportantYou must wait for the queues to be empty before proceeding to the next step. If leaves are still integrating while draining, then freezing the log tree during this process can cause the log path to exceed the maximum merge delay (MMD) threshold.
Once the queue has been fully drained, freeze the log:
$ ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run --network=rhtas --rm registry.redhat.io/rhtas/updatetree-rhel9:1.1.0 --tree_id=${OLD_TREE_ID} --admin_server=trillian-logserver-pod:8091 --tree_state=FROZEN"Create a new Merkle tree, and capture the new tree identifier:
$ export NEW_TREE_ID=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman run -q --network=rhtas --rm registry.redhat.io/rhtas/createtree-rhel9:1.1.0 --logtostderr=false --admin_server=trillian-logserver-pod:8091 --display_name=ctlog-tree" | tr -d '[:punct:][:blank:][:cntrl:]')Generate a new certificate, along with new public and private keys:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-ctlog.pem $ openssl ec -in new-ctlog.pem -pubout -out new-ctlog-public.pem $ openssl ec -in new-ctlog.pem -out new-ctlog.pass.pem -des3 -passout pass:"CHANGE_ME"Replace CHANGE_ME with a new password.
ImportantThe certificate and new keys must have unique file names.
Update the CT log configuration.
- Open the RHTAS Ansible playbook for editing.
If you are configuring the CT log signer key rotation for the first time, you need to add the following to the
tas_single_node_ctlog.sharding_configsection:tas_single_node_ctlog: sharding_config: - treeid: OLD_TREE_ID # frozen log prefix: "rhtasansible" private_key: "private-0" password: "rhtas" root_pem_file: "/ctfe-keys/fulcio-0" not_after_limit: seconds: 1728056285 nanos: 012111000Replace OLD_TREE_ID with the contents contained in the
$OLD_TREE_IDenvironment variable.NoteYou can get the current time value for seconds and nanoseconds, by running the following commands:
date +%s, anddate +%N.ImportantThe
not_after_limitfield defines the end of the timestamp range for the frozen log only. Certificates beyond this point in time are no longer accepted for inclusion in this log.-
Copy and paste the frozen log block, appending it to the
tas_single_node_ctlog.sharding_configsection, creating a new entry. Change the following lines in the new log block. Set the
treeidto the new tree identifier, change theprefixtotrusted-artifact-signer, change theprivate_keypath toprivate-1, changenot_after_limittonot_after_start, set the timestamp range, and updatetas_single_node_fulcio.ct_log_prefixfor Fulcio to make use of the new log:tas_single_node_ctlog: sharding_config: ... # frozen log - treeid: NEW_TREE_ID # new active log prefix: "trusted-artifact-signer" private_key: "private-1" password: "CHANGE_ME" root_pem_file: "/ctfe-keys/fulcio-0" not_after_start: seconds: 1713201754 nanos: 155663000 tas_single_node_fulcio: ct_log_prefix: "trusted-artifact-signer"Replace CHANGE_ME with the new private key password. The password here must match the password used for generating the new private and public keys.
ImportantThe
not_after_startfield defines the beginning of the timestamp range inclusively. This means the log will start accepting certificates at this point in time.
Update the
tas_single_node_ctlogsection for CTlog to distribute the new keys to the managed node:tas_single_node_ctlog: ... private_keys: - id: private-0 key: | {{ lookup('file', 'ctfe.key') }} - id: private-1 key: | {{ lookup('file', 'new-ctlog.pass.pem') }} public_keys: - id: public-0 key: | {{ lookup('file', 'ctfe.pub') }} - id: public-1 key: | {{ lookup('file', 'new-ctlog-public.pem') }}Configure The Update Framework (TUF) service to use the new CT log public key.
Configure your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export ROOT="${WORK}/root/root.json" $ export KEYDIR="${WORK}/keys" $ export INPUT="${WORK}/input" $ export TUF_REPO="${WORK}/tuf-repo" $ export TUF_URL="https://tuf.${BASE_HOSTNAME}"Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"Download the TUF contents to the temporary TUF directory structure:
$ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}" $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}" $ cp "${TUF_REPO}/root.json" "${ROOT}"Assign an environment variable to the active CT log signer key file name:
$ export ACTIVE_CTFE_NAME=ctfe.pub
Expire the old CT log signer key:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-ctlog-target "$ACTIVE_CTFE_NAME" \ --ctlog-uri "https://ctlog.rhtas" \ --ctlog-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Add the new CT log signer key:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-ctlog-target "new-ctlog-public.pem" \ --ctlog-uri "https://ctlog.rhtas" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Create a compressed archive file of the updated TUF repository:
$ tar -C "${WORK}" -czvf repository.tar.gz tuf-repoUpdate the RHTAS Ansible playbook by adding the new compressed archive file name to the
tas_single_node_trust_rootvariable:tas_single_node_trust_root: full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"- Save the changes to the playbook, and close your text editor.
Run the RHTAS Ansible playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
Delete the working directory:
$ rm -r $WORK
Update the
cosignconfiguration with the updated TUF configuration:$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new CT log signer key.
2.3.3. Rotating the Fulcio certificate
You can proactively rotate the certificate used by the Fulcio service. This procedure walks you through expiring your old Fulcio certificate, and replacing it with a new certificate for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old Fulcio certificate still allows you to verify artifacts signed by the old certificate.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
-
A workstation with the
rsync,openssl, andcosignbinaries installed. - A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the
tuftoolbinary from the local command-line interface (CLI) tool download page to your workstation.NoteThe URL address is the configured node as defined by the
tas_single_node_base_hostnamevariable. An example URL address would be https://cli-server.example.com, given thetas_single_node_base_hostnamevalue asexample.com.ImportantCurrently, the
tuftoolbinary is only available for Linux operating systems on the x86_64 architecture.- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary
.gzfile, and set the execution bit:$ gunzip tuftool-amd64.gz $ chmod +x tuftool-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Generate a new certificate, along with new public and private keys:
$ openssl ecparam -genkey -name prime256v1 -noout -out new-fulcio.pem $ openssl ec -in new-fulcio.pem -pubout -out new-fulcio-public.pem $ openssl ec -in new-fulcio.pem -out new-fulcio.pass.pem -des3 -passout pass:"CHANGE_ME" $ openssl req -new -x509 -key new-fulcio.pass.pem -out new-fulcio.cert.pemReplace CHANGE_ME with a new password.
ImportantThe certificate and new keys must have unique file names.
Update the RHTAS Ansible playbook by adding the new private key file name, the new certificate content, and the password to the
tas_single_node_fulciovariable:tas_single_node_fulcio: root_ca: "{{ lookup('file', 'new-fulcio.cert.pem') }}" private_key: "{{ lookup('file', 'new-fulcio.pass.pem') }}" ca_passphrase: CHANGE_MEReplace CHANGE_ME with a new password.
NoteThe password here must match the password used for generating the new private and public keys.
NoteRed Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.
Configure The Update Framework (TUF) service to use the new Fulcio certificate.
Set up your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export ROOT="${WORK}/root/root.json" $ export KEYDIR="${WORK}/keys" $ export INPUT="${WORK}/input" $ export TUF_REPO="${WORK}/tuf-repo" $ export TUF_URL="https://tuf.${BASE_HOSTNAME}" $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE $ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]') $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]')Create a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"Download the TUF contents to the temporary TUF directory structure:
$ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}" $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}" $ cp "${TUF_REPO}/root.json" "${ROOT}"Find the active Fulcio certificate file name. Open the latest target file, for example,
1.targets.json, within the local TUF repository. In this file you will find the active Fulcio certificate file name, for example,fulcio_v1.crt.pem. Set an environment variable with this active Fulcio certificate file name:$ export ACTIVE_CERT_NAME=fulcio_v1.crt.pem
Get the active Fulcio certificate from the managed node:
$ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:/etc/rhtas/certs/fulcio.pem "${ACTIVE_CERT_NAME}"Expire the old certificate:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-fulcio-target "$ACTIVE_CERT_NAME" \ --fulcio-uri "https://fulcio.rhtas" \ --fulcio-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Add the new Fulcio certificate:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-fulcio-target "new-fulcio.cert.pem" \ --fulcio-uri "https://fulcio.rhtas" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Create a compressed archive file of the updated TUF repository:
$ tar -C "${WORK}" -czvf repository.tar.gz tuf-repoUpdate the RHTAS Ansible playbook by adding the new compressed archive file content to the
tas_single_node_trust_rootvariable:tas_single_node_trust_root: full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"Delete the working directory:
$ rm -r $WORK
Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
Update the
cosignconfiguration with the updated TUF configuration:$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts with the new Fulcio certificate.
Additional resources
2.3.4. Rotating the Timestamp Authority signer key and certificate chain
You can proactively rotate the Timestamp Authority (TSA) signer key and certificate chain. This procedure walks you through expiring your old TSA signer key and certificate chain, and replacing them with new ones for Red Hat Trusted Artifact Signer (RHTAS) to use. Expiring your old TSA signer key and certificate chain still allows you to verify artifacts signed by the old key and certificate chain.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
-
A workstation with the
rsync,openssl, andcosignbinaries installed. - A SSH connection to the managed node, with root-level privileges on the managed node.
Procedure
Download the
tuftoolbinary from the local command-line interface (CLI) tool download page to your workstation.NoteThe URL address is the configured node as defined by the
tas_single_node_base_hostnamevariable. An example URL address would be https://cli-server.example.com, given that the value oftas_single_node_base_hostnameisexample.com.ImportantCurrently, the
tuftoolbinary is only available for Linux operating systems on the x86_64 architecture.- From the download page, go to the tuftool download section, and click the link for your platform.
Open a terminal on your workstation, decompress the binary
.gzfile, and set the execution bit:$ gunzip tuftool-amd64.gz $ chmod +x tuftool-amd64
Move and rename the binary to a location within your
$PATHenvironment:$ sudo mv tuftool-amd64 /usr/local/bin/tuftool
Generate a new certificate chain, and a new signer key.
ImportantThe new certificate and keys must have unique file names.
Create a temporary working directory:
$ mkdir certs && cd certs
Create the root certificate authority (CA) private key, and set a password:
$ openssl req -x509 -newkey rsa:2048 -days 365 -sha256 -nodes \ -keyout rootCA.key.pem -out rootCA.crt.pem \ -passout pass:"CHANGE_ME" \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=RootCA/CN=RootCA" \ -addext "basicConstraints=CA:true" -addext "keyUsage=cRLSign, keyCertSign"Replace CHANGE_ME with a new password.
Create the intermediate CA private key and certificate signing request (CSR), and set a password:
$ openssl req -newkey rsa:2048 -sha256 \ -keyout intermediateCA.key.pem -out intermediateCA.csr.pem \ -passout pass:"CHANGE_ME" \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=IntermediateCA/CN=IntermediateCA"Replace CHANGE_ME with a new password.
Sign the intermediate CA certificate with the root CA:
$ openssl x509 -req -in intermediateCA.csr.pem -CA rootCA.crt.pem -CAkey rootCA.key.pem \ -CAcreateserial -out intermediateCA.crt.pem -days 365 -sha256 \ -extfile <(echo -e "basicConstraints=CA:true\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \ -passin pass:"CHANGE_ME"Replace CHANGE_ME with the root CA private key password to sign the intermediate CA certificate.
Create the leaf CA private key and CSR, and set a password:
$ openssl req -newkey rsa:2048 -sha256 \ -keyout leafCA.key.pem -out leafCA.csr.pem \ -passout pass:"CHANGE_ME" \ -subj "/C=CC/ST=state/L=Locality/O=RH/OU=LeafCA/CN=LeafCA"Sign the leaf CA certificate with the intermediate CA:
$ openssl x509 -req -in leafCA.csr.pem -CA intermediateCA.crt.pem -CAkey intermediateCA.key.pem \ -CAcreateserial -out leafCA.crt.pem -days 365 -sha256 \ -extfile <(echo -e "basicConstraints=CA:false\nkeyUsage=cRLSign, keyCertSign\nextendedKeyUsage=critical,timeStamping") \ -passin pass:"CHANGE_ME"Replace CHANGE_ME with the intermediate CA private key password to sign the leaf CA certificate.
Create the certificate chain by combining the newly created certificates together:
$ cat leafCA.crt.pem intermediateCA.crt.pem rootCA.crt.pem > new-tsa.certchain.pem
Update the RHTAS playbook with the new certificate chain, private key, and password:
tas_single_node_tsa: certificate_chain: "{{ lookup('file', 'new-tsa.certchain.pem') }}" signer_private_key: "{{ lookup('file', 'leafCA.key.pem') }}" ca_passphrase: CHANGE_MEReplace CHANGE_ME with the leaf CA private key password.
NoteRed Hat recommends sourcing the passphrase either from a file or encrypting it by using Ansible Vault.
Find your active TSA certificate file name, the TSA URL string, and configure your shell environment with these values:
$ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE $ export ACTIVE_CERT_CHAIN_NAME=tsa.certchain.pem $ export TSA_URL=https://tsa.${BASE_HOSTNAME}/api/v1/timestamp $ curl $TSA_URL/certchain -o $ACTIVE_CERT_CHAIN_NAMEConfigure The Update Framework (TUF) service to use the new TSA certificate chain.
Set up your shell environment:
$ export WORK="${HOME}/trustroot-example" $ export ROOT="${WORK}/root/root.json" $ export KEYDIR="${WORK}/keys" $ export INPUT="${WORK}/input" $ export TUF_REPO="${WORK}/tuf-repo" $ export TUF_URL="https://tuf.${BASE_HOSTNAME}" $ export MANAGED_NODE_IP=IP_OF_ANSIBLE_MANAGED_NODE $ export MANAGED_NODE_SSH_USER=USER_TO_CONNECT_TO_MANAGED_NODE $ export NEW_CERT_CHAIN_NAME=new-tsa.certchain.pemCreate a temporary TUF directory structure:
$ mkdir -p "${WORK}/root/" "${KEYDIR}" "${INPUT}" "${TUF_REPO}"Download the TUF contents to the temporary TUF directory structure:
$ export REMOTE_KEYS_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-signing-keys" | tr -d '[:space:]') $ export REMOTE_TUF_VOLUME=$(ssh ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP} -t "sudo podman volume mount tuf-repository" | tr -d '[:space:]') $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_KEYS_VOLUME}/" "${KEYDIR}" $ rsync -r --rsync-path="sudo rsync" ${MANAGED_NODE_SSH_USER}@${MANAGED_NODE_IP}:"${REMOTE_TUF_VOLUME}/" "${TUF_REPO}" $ cp "${TUF_REPO}/root.json" "${ROOT}"Expire the old TSA certificate:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-tsa-target "$ACTIVE_CERT_CHAIN_NAME" \ --tsa-uri "$TSA_URL" \ --tsa-status "Expired" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Add the new TSA certificate:
$ tuftool rhtas \ --root "${ROOT}" \ --key "${KEYDIR}/snapshot.pem" \ --key "${KEYDIR}/targets.pem" \ --key "${KEYDIR}/timestamp.pem" \ --set-tsa-target "$NEW_CERT_CHAIN_NAME" \ --tsa-uri "$TSA_URL" \ --outdir "${TUF_REPO}" \ --metadata-url "file://${TUF_REPO}"Create a compressed archive file of the updated TUF repository:
$ tar -C "${WORK}" -czvf repository.tar.gz tuf-repo
Update the RHTAS Ansible playbook by adding the new compressed archive file name to the tas_single_node_trust_root variable:
tas_single_node_trust_root:
  full_archive: "{{ lookup('file', 'repository.tar.gz') | b64encode }}"
Delete the working directory:
$ rm -r $WORK
Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
Update the cosign configuration with the updated TUF configuration:
$ cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json
Now, you are ready to sign and verify your artifacts by using the new TSA signer key and certificate.
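For example, a minimal signing sketch that requests a timestamp from the rotated TSA; the image name is only an example:
$ cosign sign -y --timestamp-server-url="${TSA_URL}" ttl.sh/rhtas/example-image:1h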
2.4. Using your own certificate authority bundle
You can use your organization’s certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat’s Trusted Artifact Signer (RHTAS) service.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux managed by Ansible.
- Your CA root certificate.
Procedure
- Open the RHTAS Ansible Playbook for editing.
Under the tas_single_node_fulcio section, update the trusted_ca value with your custom CA bundle file:
...
tas_single_node_fulcio:
  trusted_ca: "{{ lookup('file', 'ca-bundle.crt') }}"
...
ImportantThe certificate filename must be ca-bundle.crt.
- Save, and quit the editor.
Run the RHTAS Ansible Playbook to apply the changes:
$ ansible-playbook -i inventory play.yml
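Note that if your organization’s CA material is split across several PEM files, you can assemble the expected ca-bundle.crt file before editing the playbook; a minimal sketch, where the input file names are assumptions:
$ cat org-root-ca.pem org-intermediate-ca.pem > ca-bundle.crt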
Chapter 3. Verifying signed artifacts in an offline environment
In some cases, you need to verify an artifact’s authenticity, but do not have access to the Red Hat Trusted Artifact Signer (RHTAS) service that signed that artifact. In these cases, you can still verify an artifact’s signature by doing an offline verification.
Before you can start doing offline artifact verification, you need access to the RHTAS signing environment, and access to an image registry. In the offline environment, you only need access to the same image registry as the signing environment.
Prerequisites
- Installation of RHTAS running either on the Red Hat OpenShift Container Platform, or on Red Hat Enterprise Linux (RHEL) managed by Ansible.
- A workstation with the cosign, tuftool, tar, and sha256sum binaries installed.
- Initialization of cosign with the current signing environment.
Procedure
In the signing environment, do the following steps:
Sign an image by using cosign:
cosign sign IMAGE_NAME:TAG
$ cosign sign -y ttl.sh/rhtas/example-image:1h
Get the Trust Root URL.
For RHTAS deployments on Red Hat Enterprise Linux:
$ export BASE_HOSTNAME=BASE_HOSTNAME_OF_RHTAS_SERVICE
$ export TUF_SERVER_URL=https://tuf.${BASE_HOSTNAME}
For RHTAS deployments on Red Hat OpenShift Container Platform:
$ export TUF_SERVER_URL="$(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)"
Create a clone of the Trust Root locally:
$ export TUF_REPOSITORY="${HOME}/repository"
$ tuftool clone --allow-root-download --metadata-dir "${TUF_REPOSITORY}" --targets-dir "${TUF_REPOSITORY}/targets" --metadata-url "${TUF_SERVER_URL}" --targets-url "${TUF_SERVER_URL}/targets"
Create a compressed archive file of the Trust Root:
$ tar -czvf repository.tar.gz "${TUF_REPOSITORY}"
$ sha256sum repository.tar.gz
Make note of the checksum output for use later in the offline environment.
Copy the compressed archive file to the offline environment.
ImportantYou must copy the Trust Root compressed archive file every time you update The Update Framework (TUF) metadata files or when you rotate any RHTAS component keys and certificates.
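For example, if a transfer host is reachable from both environments, a minimal sketch of copying the archive; the user, host, and path are assumptions:
$ scp repository.tar.gz transfer-user@transfer-host:/srv/transfer/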
In the offline environment, do the following steps:
- Change directory to where you copied the compressed archive file of the Trust Root.
Verify the checksum by using the checksum value from the signing environment:
$ echo "SHA256_CHECKSUM repository.tar.gz" > checksum.txt
$ sha256sum --check checksum.txt || echo "Archive integrity compromised, don't continue with the procedure\!"
ImportantOnly continue if the integrity check is successful.
Expand the compressed archive file:
$ tar -xzvf repository.tar.gz
Initialize
cosign:$ cd repository/ $ cosign initialize --mirror=file://$(pwd)/ --root=$(pwd)/1.root.json
Verify the signed artifacts:
$ export IMAGE="IMAGE_NAME:TAG"
$ export SIGNING_EMAIL_ADDR=SIGNING_EMAIL_ADDRESS
$ export SIGNING_OIDC_ISSUER=OIDC_ISSUER_URL
$ cosign verify --certificate-identity="${SIGNING_EMAIL_ADDR}" --certificate-oidc-issuer="${SIGNING_OIDC_ISSUER}" "${IMAGE}"
Chapter 4. Verifying Red Hat signatures
You can use Red Hat Trusted Artifact Signer (RHTAS) to verify the authenticity of Red Hat products and artificial intelligence (AI) generated Granite models.
Prerequisites
- Installation of RHTAS running on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.
- Access to the Red Hat Customer Portal for downloading product signing keys.
- A workstation with the cosign binary installed, version 2.2 or later.
Procedure
- Download Red Hat’s product signing keys from the Customer Portal for the products you want to verify. This downloads a text file containing Red Hat’s public signing key.
Open a terminal on your workstation. Download the Rekor public key, and create a new rekor.pem file:
$ curl https://REKOR_HOSTNAME/api/v1/log/publicKey > rekor.pem
Create a new cosign public key from the Red Hat product signing key:
$ cat 63405576.txt > cosign.pub
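Optionally, you can confirm that the file contains a usable public key; a minimal sketch, assuming the downloaded key is PEM-encoded:
$ openssl pkey -pubin -in cosign.pub -noout -text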
Configure your shell environment for cosign to use the new Rekor public key:
$ export SIGSTORE_REKOR_PUBLIC_KEY=rekor.pem
Verify a Red Hat signed image by using the cosign public key:
cosign verify --key cosign.pub IMAGE_NAME:TAG
$ cosign verify --key cosign.pub registry.redhat.io/rhelai1/granite-3.1-8b-starter-v1:latest
Appendix A. Restore owner references script
This Bash script restores the ownerReferences field when you restore Red Hat Trusted Artifact Signer (RHTAS) data to a different OpenShift cluster.
#!/bin/bash
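# Requires the oc and jq command-line tools, and an active oc login to the
# restored cluster with the RHTAS project selected as the current project.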
# List of resources to check
RESOURCES=("Fulcio" "Rekor" "Trillian" "TimestampAuthority" "CTlog" "Tuf")
function validate_owner() {
local RESOURCE=$1
local ITEM=$2
local OWNER_NAME=$3
# Check all the labels exist and are the same
LABELS=("app.kubernetes.io/instance" "app.kubernetes.io/part-of" "velero.io/backup-name" "velero.io/restore-name")
for LABEL in "${LABELS[@]}"; do
PARENT_LABEL=$(oc get Securesign "$OWNER_NAME" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
CHILD_LABEL=$(oc get $RESOURCE "$ITEM" -o json | jq -r ".metadata.labels[\"$LABEL\"]")
if [[ -z "$CHILD_LABEL" || $CHILD_LABEL == "null" ]]; then
echo " $LABEL label missing in $RESOURCE"
return 1
elif [[ -z "$PARENT_LABEL" || $PARENT_LABEL == "null" ]]; then
echo " $LABEL label missing in Securesign"
return 1
elif [[ "$CHILD_LABEL" != "$PARENT_LABEL" ]]; then
echo " $LABEL labels not matching: $CHILD_LABEL != $PARENT_LABEL"
return 1
fi
done
return 0
}
for RESOURCE in "${RESOURCES[@]}"; do
echo "Checking $RESOURCE ..."
# Get all resources missing ownerReferences
MISSING_REFS=$(oc get $RESOURCE -o json | jq -r '.items[] | select(.metadata.ownerReferences == null) | .metadata.name')
for ITEM in $MISSING_REFS; do
echo " Missing ownerReferences in $RESOURCE/$ITEM"
# Find the expected owner based on labels
OWNER_NAME=$(oc get $RESOURCE "$ITEM" -o json | jq -r '.metadata.labels["app.kubernetes.io/name"]')
if [[ -z "$OWNER_NAME" || "$OWNER_NAME" == "null" ]]; then
echo " Skipping $RESOURCE/$ITEM: name not found in labels"
continue
fi
if ! validate_owner $RESOURCE $ITEM $OWNER_NAME; then
echo " Skipping ..."
continue
fi
# Try to get the owner's UID from Securesign
OWNER_UID=$(oc get Securesign "$OWNER_NAME" -o jsonpath='{.metadata.uid}' 2>/dev/null)
if [[ -z "$OWNER_UID" || "$OWNER_UID" == "null" ]]; then
echo " Failed to find Securesign/$OWNER_NAME UID, skipping ..."
continue
fi
echo " Found owner: Securesign/$OWNER_NAME (UID: $OWNER_UID)"
# Patch the object with the restored ownerReference
oc patch $RESOURCE "$ITEM" --type='merge' -p "{
\"metadata\": {
\"ownerReferences\": [
{
\"apiVersion\": \"rhtas.redhat.com/v1alpha1\",
\"kind\": \"Securesign\",
\"name\": \"$OWNER_NAME\",
\"uid\": \"$OWNER_UID\",
\"controller\": true,
\"blockOwnerDeletion\": true
}
]
}
}"
echo "Restored ownerReferences for $RESOURCE/$ITEM"
done
done
echo "Done"
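A minimal usage sketch, assuming you save the script as restore-owner-references.sh (the file name is an assumption) and that the RHTAS resources were restored into the trusted-artifact-signer namespace:
$ chmod +x restore-owner-references.sh
$ oc project trusted-artifact-signer
$ ./restore-owner-references.sh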