Deploying Metrics Store on an existing OpenShift Container Platform deployment
Prerequisites
- An OCP cluster deployed on top of a Red Hat Virtualization environment.
- OpenShift Logging (EFK stack) is not deployed yet. If you run `oc get pods -n openshift-logging`, you should not see any pods.
- A single unpartitioned disk, such as /dev/vde, on the master0 node.
Procedure
Execute the following procedure as the root user on the machine used to install the OCP cluster.
Prepare Inventories and SSH Access
- Add the following entries to the inventory that you used for deployment of your OCP cluster, under the [all:vars] section:

  ```
  # OpenShift logging
  ovirt_env_name=<virtual machine name>
  openshift_master_identity_providers=[{'mappingMethod': 'lookup', 'challenge': 'true', 'login': 'true', 'kind': 'AllowAllPasswordIdentityProvider', 'name': 'allow_all'}]
  openshift_logging_es_nodeselector={'node-role.kubernetes.io/master': 'true'}
  openshift_logging_es_cluster_size=1
  openshift_logging_es_number_of_replicas=0
  openshift_logging_install_logging=true
  openshift_logging_es_allow_external=true
  openshift_logging_use_mux=true
  openshift_logging_mux_allow_external=true
  openshift_logging_mux_file_buffer_storage_type=hostmount
  openshift_logging_elasticsearch_storage_type=hostmount
  openshift_logging_elasticsearch_hostmount_path=/var/lib/elasticsearch

  # Public URL for OpenShift UI access
  openshift_logging_master_public_url="https://{{ openshift_public_hostname }}:8443"
  # Public hostname for Kibana browser access
  openshift_logging_kibana_hostname="kibana.{{ public_hosted_zone }}"
  # The public hostname for Elasticsearch direct API access
  openshift_logging_es_hostname="es.{{ public_hosted_zone }}"
  openshift_logging_mux_hostname="mux.{{ public_hosted_zone }}"

  openshift_logging_es_memory_limit=16Gi
  openshift_logging_es_cpu_request=1
  openshift_logging_use_ops=false
  openshift_cluster_monitoring_operator_install=false
  openshift_metrics_install_metrics=false
  openshift_logging_mux_namespaces=["ovirt-metrics-{{ ovirt_env_name }}","ovirt-logs-{{ ovirt_env_name }}"]
  ```
- If you don't use DNS resolution, add the address for master0 to /etc/hosts:

  ```
  # echo "<master0_ip> master0.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com" >> /etc/hosts
  ```
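If you rerun the setup, the plain echo above appends a duplicate line each time. A small guard makes the step idempotent; this is an optional sketch using the same example values, with the hosts file passed as a parameter so it is safe to try on a scratch file first:

```shell
# Optional sketch: append a hosts entry only if the hostname is not already
# present (idempotent variant of the echo above).
add_host_entry() {
  # $1 = IP address, $2 = hostname, $3 = hosts file (e.g. /etc/hosts)
  grep -qw "$2" "$3" || echo "$1 $2" >> "$3"
}
```

For example: `add_host_entry "<master0_ip>" "master0.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com" /etc/hosts`.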
- Add the master0 entries to the host group part of the inventory. For example:

  ```
  [OSEv3:children]
  nodes
  masters
  etcd
  lb

  [masters]
  master0.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com

  [nodes]
  master0.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com openshift_node_group_name=node-config-master

  [etcd]

  [lb]
  ```
- Generate an SSH key pair for the root user (make sure the key does not require a passphrase):

  ```
  # ssh-keygen
  ```
- Copy the SSH public key to master0:

  ```
  # ssh-copy-id master0.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com
  ```
- Log in using SSH to master0 to prepare persistent storage (hostmount in this example). For more details, see Aggregated Logging or Persistent Storage.
Partition and Mount
- Create a new partition on the disk:

  ```
  # gdisk /dev/vde
  ```

- Create a file system in the new partition:

  ```
  # mkfs.ext4 /dev/vde1
  ```

- Prepare the mountpoint:

  ```
  # mkdir /var/lib/elasticsearch
  ```

- Mount the file system (optionally, add the entry to /etc/fstab):

  ```
  # mount /dev/vde1 /var/lib/elasticsearch
  ```
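For the optional /etc/fstab entry, a guarded append like the following keeps the step repeatable. The device, mountpoint, and file system type are the example values from the steps above, and the fstab path is a parameter so the sketch can be tried on a scratch file:

```shell
# Optional sketch: persist the mount across reboots by adding an fstab entry,
# skipping the append if the mountpoint is already listed.
add_fstab_entry() {
  # $1 = device (e.g. /dev/vde1), $2 = mountpoint, $3 = fstab file
  grep -q " $2 " "$3" || echo "$1 $2 ext4 defaults 0 0" >> "$3"
}
```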
Grant Permissions
- Grant additional permissions:

  ```
  # chgrp 65534 /var/lib/elasticsearch
  # chmod -R 0770 /var/lib/elasticsearch
  # semanage fcontext -a -t container_file_t "/var/lib/elasticsearch(/.*)?"
  # restorecon -R -v /var/lib/elasticsearch
  ```

- Grant the Elasticsearch service account the privilege to mount and edit a local volume:

  ```
  # oc project openshift-logging
  # oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:openshift-logging:aggregated-logging-elasticsearch
  ```
Deploy Logging
- Log in using SSH to the OCP cluster machine and run the playbook that deploys OpenShift Logging (don't forget to include the ansible-vault and other vars files you are using):

  ```
  ANSIBLE_LOG_PATH="logging.log" ansible-playbook -i openshift_3_11.hosts -e @secure_vars.yaml -e @vars.yaml /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml --ask-vault-pass
  ```
- Once the playbook finishes, log in using SSH to master0 and check that the deployment was successful:

  ```
  # oc get pods
  ```

  You should see something like this output example:
NAME | READY | STATUS | RESTARTS | AGE
-|-|-|-|-
logging-es-data-master-pixn91nj-1-l9njq | 2/2 | Running | 0 | 42m
logging-fluentd-7lpmh | 1/1 | Running | 0 | 43m
logging-kibana-1-zbgjb | 2/2 | Running | 0 | 45m
logging-mux-1-bgcsl | 1/1 | Running | 0 | 43m
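Instead of scanning the table by eye, the STATUS column can be tested with awk. The function below reads a pod listing from stdin, so on the cluster you would pipe `oc get pods --no-headers` into it; this is a sketch, not part of the original procedure:

```shell
# Sketch: exit non-zero if any pod in the listing is not in the Running state.
# Expects `oc get pods --no-headers` style input (NAME READY STATUS RESTARTS AGE).
all_running() {
  awk '$3 != "Running" { bad = 1 } END { exit bad }'
}
```

For example: `oc get pods --no-headers -n openshift-logging | all_running && echo "logging is up"`.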
Configuration
- Enable external Elasticsearch access. For details, see Enabling External Elasticsearch Access.
- Obtain the IP address of your OCP node:

  ```
  # hostname -i
  ```
- Patch the logging-es service, using the IP address from the previous step:

  ```
  # oc patch svc logging-es -p '{"spec":{"externalIPs": ["10.37.140.15"]}}'
  ```
- Edit your curator configmap (replace <virtual machine name> with your ovirt_env_name) to include a section like this:

  ```
  # oc edit cm/logging-curator
  ```

  ```
  config.yaml: |
    ovirt-metrics-<virtual machine name>:
      delete:
        days: 3
  ```
- Log in using SSH to the Red Hat Virtualization engine machine.
- Create a new config file:

  ```
  # cp /etc/ovirt-engine-metrics/config.yml.example /etc/ovirt-engine-metrics/config.yml.d/config.yml
  ```
- Replace the default ovirt_env_name in the config file with your own:

  ```
  # sed -i "s|^\(ovirt_env_name\s*:\s*\).*\$|\1<virtual machine name>|" /etc/ovirt-engine-metrics/config.yml.d/config.yml
  ```
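The sed expression keeps the `ovirt_env_name:` key and replaces only its value. Wrapped as a function with the file as a parameter, it can be tried on a copy before touching the real config (the file contents in the usage example are made up):

```shell
# Sketch: replace the value of ovirt_env_name in a YAML-style config file,
# using the same sed expression as the step above.
set_env_name() {
  # $1 = new ovirt_env_name, $2 = config file
  sed -i "s|^\(ovirt_env_name\s*:\s*\).*\$|\1$1|" "$2"
}
```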
- Log in using SSH to the Metrics Store node.
- Add the logging routes to /etc/hosts (if you used an all-in-one OCP installation, you should already have an entry for the OCP node):

  ```
  # truncate -s -1 /etc/hosts && for host in $(oc get routes -n openshift-logging -o custom-columns=HOST:.spec.host --no-headers=true); do echo -n " $host" >> /etc/hosts; done
  ```
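The one-liner strips the trailing newline from /etc/hosts and then appends each route hostname to the last line (the OCP node entry). The same technique can be sketched with the hostnames passed as arguments instead of coming from `oc get routes`, so it can be tried on a scratch file:

```shell
# Sketch: append hostnames to the last line of a hosts file, the technique
# used by the one-liner above (hostnames are arguments here, not oc output).
append_routes() {
  # $1 = hosts file; remaining arguments = hostnames to append
  f="$1"; shift
  truncate -s -1 "$f"              # drop the trailing newline
  for host in "$@"; do
    echo -n " $host" >> "$f"       # extend the last line
  done
  echo >> "$f"                     # restore the trailing newline
}
```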
- Log in using SSH to the Red Hat Virtualization engine machine.
- Add the line with the OCP node from /etc/hosts to the engine's /etc/hosts:

  ```
  # ssh root@xx.xx.xx.xx tail -n 1 /etc/hosts >> /etc/hosts
  # ESHOST=$(grep -o " es.*\b " /etc/hosts | tr -d [:space:])
  # sed -i "s|^\(elasticsearch_host\s*:\s*\).*\$|\1 $ESHOST|" /etc/ovirt-engine-metrics/config.yml.d/config.yml
  ```
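The grep/tr pipeline above pulls the es. hostname out of the copied hosts line; if other hostnames follow it on the same line, a field-based variant avoids capturing them as well. This is a sketch of an alternative, not part of the original procedure:

```shell
# Sketch: print the first whitespace-separated field starting with "es."
# from a hosts file (a stricter variant of the grep/tr extraction above).
extract_es_host() {
  awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^es\./) { print $i; exit } }' "$1"
}
```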
- Generate a public SSH key from the engine's private key, copy it to the Metrics Store virtual machine, and run the metrics configuration playbook:

  ```
  # mytemp=$(mktemp -d)
  # cp /etc/pki/ovirt-engine/keys/engine_id_rsa $mytemp
  # ssh-keygen -y -f $mytemp/engine_id_rsa > $mytemp/engine_id_rsa.pub
  # ssh-copy-id -i $mytemp/engine_id_rsa.pub root@$ESHOST
  # rm -rf $mytemp
  # echo "" > metrics.log && /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh -vvv | tee metrics.log
  ```
Configure Kibana
- When the playbook successfully finishes, log in using SSH to the Metrics Store virtual machine.
- Grant administrator privileges to the user that connects to the Kibana UI (in this example, developer):

  ```
  # oc adm policy add-cluster-role-to-user cluster-admin developer
  ```

- Connect as developer to the Kibana UI:

  kibana.<virtual machine name>.ocp.rhev.lab.eng.brq.redhat.com

- Go to Management > Index Patterns and add these two patterns:

  ```
  project.ovirt-logs-<virtual machine name>.*
  project.ovirt-metrics-<virtual machine name>.*
  ```