How to create custom Metering reports based on project labels
Environment
- Red Hat OpenShift Container Platform (RHOCP)
- 4.x
Issue
- We've added several billing codes as labels to our projects, and we want to aggregate these into the reports generated by openshift-metering.
- Our cluster is fully disconnected, so we can't make use of the Red Hat Cost Management solution.
Resolution
Important: The OpenShift Metering Operator is deprecated as of OCP 4.6.
Assuming you already have the project labels in place, for example:
$ oc label namespace user-metering budget_code=test
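The labels can be double-checked before wiring up the datasource (user-metering and budget_code are just the example values used here):

```shell
$ oc get namespace user-metering --show-labels
```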
How to create the custom setup
- Create a ReportDataSource of prometheusMetricsImporter type, for example:
$ cat << EOF > namespace-labels-datasource.yaml
apiVersion: metering.openshift.io/v1
kind: ReportDataSource
metadata:
  name: namespace-labels
  namespace: openshift-metering
  labels:
    user: custom
spec:
  prometheusMetricsImporter:
    query: >
      sum(kube_namespace_labels{namespace!~"openshift.*"}) by (namespace, label_budget_code)
EOF
$ oc create -f namespace-labels-datasource.yaml
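Before building queries on top of the new datasource, it is worth checking that the Prometheus importer has actually started pulling data. The exact status columns printed vary by metering version, but the object can be inspected with:

```shell
$ oc get reportdatasource namespace-labels -n openshift-metering -o yaml
```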
- Create a ReportQuery to query the new data created in the previous step:
$ cat << EOF > namespace-labels-query.yaml
apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: namespace-labels
  namespace: openshift-metering
  labels:
    user: custom
spec:
  columns:
  - name: namespace
    type: varchar
  - name: budget_code
    type: varchar
  - name: labels
    tableHidden: true
    type: map<varchar, varchar>
  - name: timeprecision
    type: double
    unit: seconds
  - name: timestamp
    type: timestamp
    unit: date
  - name: dt
    type: varchar
  query: >
    SELECT
      coalesce(element_at(labels, 'namespace'), '') as namespace,
      coalesce(element_at(labels, 'label_budget_code'), '') as budget_code,
      labels,
      timeprecision,
      "timestamp",
      dt
    FROM hive.metering.datasource_openshift_metering_namespace_labels
EOF
$ oc create -f namespace-labels-query.yaml
- Create a ReportDataSource of type reportQueryView using the ReportQuery from the previous step:
$ cat << EOF > namespace-labels-view.yaml
apiVersion: metering.openshift.io/v1
kind: ReportDataSource
metadata:
  name: namespace-labels-view
  namespace: openshift-metering
  labels:
    user: custom
spec:
  reportQueryView:
    queryName: namespace-labels
EOF
$ oc create -f namespace-labels-view.yaml
- Create an aggregation ReportQuery:
$ cat << EOF > budget-code-namespace-cpu-usage.yaml
apiVersion: metering.openshift.io/v1
kind: ReportQuery
metadata:
  name: budget-code-namespace-cpu-usage
  namespace: openshift-metering
  labels:
    user: custom
spec:
  columns:
  - name: period_start
    type: timestamp
    unit: date
  - name: period_end
    type: timestamp
    unit: date
  - name: budget_code
    type: varchar
  - name: namespace
    type: varchar
    unit: kubernetes_namespace
  - name: pod_usage_cpu_core_seconds
    type: double
    unit: cpu_core_seconds
  inputs:
  - name: ReportingStart
    type: time
  - name: ReportingEnd
    type: time
  - default: pod-cpu-usage-raw
    name: PodCpuUsageRawDataSourceName
    type: ReportDataSource
  - default: namespace-labels-view
    name: NamespaceLabelsName
    type: ReportDataSource
  query: |
    SELECT
      timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AS period_start,
      timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end,
      labels.budget_code,
      pod.namespace,
      sum(pod.pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds
    FROM {| dataSourceTableName .Report.Inputs.PodCpuUsageRawDataSourceName |} pod,
         {| dataSourceTableName .Report.Inputs.NamespaceLabelsName |} labels
    WHERE pod.timestamp >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}'
      AND pod.timestamp < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}'
      AND pod.dt >= '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prometheusMetricPartitionFormat |}'
      AND pod.dt <= '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prometheusMetricPartitionFormat |}'
      AND pod.namespace = labels.namespace
    GROUP BY labels.budget_code, pod.namespace
EOF
$ oc create -f budget-code-namespace-cpu-usage.yaml
- Create the custom Report, for example:
$ cat << EOF > report-budget-code-namespace-cpu-usage-2020.yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: budget-code-namespace-cpu-usage-2020
  namespace: openshift-metering
  labels:
    user: custom
spec:
  query: budget-code-namespace-cpu-usage
  reportingEnd: "2020-12-30T23:59:59Z"
  reportingStart: "2020-01-01T00:00:00Z"
  runImmediately: true
EOF
$ oc create -f report-budget-code-namespace-cpu-usage-2020.yaml
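A Report covering a long window like the example above can take a while to finish. Its progress can be followed by inspecting the object's status (the printed columns vary by metering version):

```shell
$ oc get report budget-code-namespace-cpu-usage-2020 -n openshift-metering
```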
How to get the report
- Create a project to run the jobs, for example:
$ oc new-project user-metering
- Create the necessary NetworkPolicy to allow access to the openshift-metering namespace:
$ cat << EOF > allow-from-user-metering.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-user-metering
  namespace: openshift-metering
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          metering.user/scrape: "true"
  podSelector:
    matchLabels:
      app: reporting-operator
  policyTypes:
  - Ingress
EOF
$ oc create -f allow-from-user-metering.yaml
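The namespaceSelector in the policy above only matches namespaces carrying the metering.user/scrape: "true" label, so the user-metering project created earlier has to be labeled accordingly for its pods to reach the reporting-operator:

```shell
$ oc label namespace user-metering metering.user/scrape=true
```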
- Create a ServiceAccount with permission to export reports, for example:
$ oc create sa report-runner -n user-metering
$ oc adm policy add-role-to-user report-exporter -z report-runner -n openshift-metering --role-namespace=openshift-metering --rolebinding-name=user-role-exporters
- Create the Job using the following template, for example:
$ cat << 'EOF' > jobreporting.yaml
apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: reporting
metadata:
  name: reporting
objects:
- apiVersion: batch/v1
  kind: Job
  metadata:
    name: reporting
  spec:
    activeDeadlineSeconds: 3600
    backoffLimit: 6
    completions: 1
    parallelism: 1
    template:
      metadata:
        name: reporting
        labels:
          job: reporting
      spec:
        serviceAccountName: report-runner
        containers:
        - name: ose-cli
          image: registry.redhat.io/rhel7/support-tools
          command:
          - /bin/bash
          - -c
          - |
            #!/bin/bash
            token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
            curl --insecure -H "Authorization: Bearer ${token}" "https://reporting-operator.openshift-metering.svc.cluster.local:8080/api/v1/reports/get?name=${REPORT_NAME}&namespace=openshift-metering&format=${REPORT_FORMAT}"
          env:
          - name: REPORT_NAME
            value: ${REPORT_NAME}
          - name: REPORT_FORMAT
            value: ${REPORT_FORMAT}
        restartPolicy: Never
parameters:
- description: REPORT_NAME to get
  name: REPORT_NAME
  required: true
- name: REPORT_FORMAT
  description: REPORT_FORMAT to use, csv, json or tabular
  required: true
  value: "csv"
EOF
$ oc process -f jobreporting.yaml -p REPORT_NAME="budget-code-namespace-cpu-usage-2020" | oc create -f -
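The Job simply prints the report to stdout, so once it completes the data can be captured from its logs, e.g. `oc logs -n user-metering job/reporting > budget-code-namespace-cpu-usage-2020.csv`. For troubleshooting a failed Job, the URL its curl command targets can be rebuilt and queried by hand from any pod allowed by the NetworkPolicy. A sketch, assuming the example report name used above:

```shell
# Rebuild the reporting-operator URL the Job's curl command targets;
# REPORT_NAME and REPORT_FORMAT correspond to the template parameters above.
REPORT_NAME="budget-code-namespace-cpu-usage-2020"
REPORT_FORMAT="csv"
URL="https://reporting-operator.openshift-metering.svc.cluster.local:8080/api/v1/reports/get?name=${REPORT_NAME}&namespace=openshift-metering&format=${REPORT_FORMAT}"
echo "${URL}"
```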
Root Cause
The official Metering documentation has no guidelines for doing this.
Diagnostic Steps
To debug PrestoSQL and try out your new queries, follow this guideline.
NOTE: Instead of the default setup, hive-metastore can also work with an external MySQL or PostgreSQL database if needed; check this doc for more details.
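As an illustration only, pointing hive-metastore at an external PostgreSQL is configured in the MeteringConfig resource roughly as below; the hostname, database name, and credentials are placeholders, and the exact field names should be verified against the Metering documentation for your OCP version:

```yaml
# Hypothetical MeteringConfig fragment (spec section only):
# disable the built-in metastore PVC and use an external PostgreSQL instead.
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/metastore"
          driver: "org.postgresql.Driver"
          username: "hive"
          password: "changeme"
```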
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.