How to dump resources, status, and logs of AMQ Streams on OpenShift
Environment
- Red Hat AMQ Streams on OpenShift
Issue
- How to dump resources, status, and logs of AMQ Streams on OpenShift?
Resolution
It is possible to use a reporting bash script to collect diagnostic data in bulk (sensitive secret values are excluded).
curl -sLk "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/tools/report.sh" | bash -s -- --namespace=[CLUSTER NAMESPACE] --cluster=[CLUSTER NAME] --out-dir=[local folder like `~/Downloads`]
If you have deployments such as Kafka MirrorMaker2, Kafka Connect, or Kafka Bridge, you need to add the corresponding switch (--mm2, --connect, or --bridge) to the above command:
curl -sLk "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/tools/report.sh" | bash -s -- --namespace=[CLUSTER NAMESPACE] --cluster=[CLUSTER NAME] --mm2=[MIRRORMAKER2 NAME] --out-dir=[local folder like `~/Downloads`]
curl -sLk "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/tools/report.sh" | bash -s -- --namespace=[CLUSTER NAMESPACE] --cluster=[CLUSTER NAME] --connect=[CONNECT NAME] --out-dir=[local folder like `~/Downloads`]
curl -sLk "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/tools/report.sh" | bash -s -- --namespace=[CLUSTER NAMESPACE] --cluster=[CLUSTER NAME] --bridge=[BRIDGE NAME] --out-dir=[local folder like `~/Downloads`]
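As a worked example, the commands above can be assembled from variables so each placeholder is explicit. This is a minimal sketch; the namespace, cluster, and component names below are assumptions, not values from your environment:

```shell
# Hypothetical values -- replace with your own namespace and resource names.
NAMESPACE="kafka"
CLUSTER="my-cluster"
OUT_DIR="$HOME/Downloads"

ARGS="--namespace=$NAMESPACE --cluster=$CLUSTER --out-dir=$OUT_DIR"

# Append optional switches only for components that are actually deployed.
MM2_NAME=""          # e.g. "my-mm2" if a KafkaMirrorMaker2 resource exists
CONNECT_NAME=""      # e.g. "my-connect" if a KafkaConnect resource exists
[ -n "$MM2_NAME" ] && ARGS="$ARGS --mm2=$MM2_NAME"
[ -n "$CONNECT_NAME" ] && ARGS="$ARGS --connect=$CONNECT_NAME"

# Print the final command instead of running it, so it can be reviewed first.
echo "curl -sLk https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/main/tools/report.sh | bash -s -- $ARGS"
```

Dropping the `echo` (and quoting) runs the report directly; printing it first makes it easy to verify the switches before contacting a cluster.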
Note that this script requires bash 4+ and has been tested only on GNU/Linux and macOS with GNU utilities.
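Because the script is piped straight into bash, it is worth confirming the bash 4+ requirement up front. A small sketch using bash's built-in BASH_VERSINFO array:

```shell
#!/usr/bin/env bash
# Check the running bash's major version before piping report.sh into it.
# BASH_VERSINFO[0] holds the major version (e.g. 5 on current GNU/Linux).
if [ "${BASH_VERSINFO[0]:-0}" -ge 4 ]; then
  echo "bash ${BASH_VERSINFO[0]} detected: OK to run report.sh"
else
  echo "bash is older than 4; install a newer bash (e.g. via Homebrew on macOS)"
fi
```

On macOS in particular, the system /bin/bash is 3.2, so a newer bash from a package manager is typically needed.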
Documentation
For more information on using the script, see "Chapter 28. Retrieving diagnostic and troubleshooting data" in the product documentation.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.