How to customize Puppet Configuration Data with hiera in OpenStack Director
Environment
Red Hat Enterprise Linux OpenStack Platform 7.
Red Hat OpenStack Platform 8.
Red Hat OpenStack Platform 9.
Red Hat OpenStack Platform 10.
Red Hat OpenStack Platform 11.
Red Hat OpenStack Platform 12.
Red Hat OpenStack Platform 13.
Issue
The OpenStack Platform installation and usage guide explains how to customize puppet configuration data. However, how can one find out all available parameters?
Resolution
Customizing Puppet Configuration Data
The Heat template collection contains a set of parameters to pass extra configuration to certain node types. These parameters need to be added to the parameter_defaults: section of one of the environment files, for example network_environment.yaml. They save the configuration as hieradata for the nodes' Puppet configuration:
ExtraConfig
Configuration to add to all nodes.
controllerExtraConfig # for OSP < 10
Configuration to add to all Controller nodes.
ControllerExtraConfig # for OSP >= 10
Configuration to add to all Controller nodes.
NovaComputeExtraConfig # for OSP < 12
Configuration to add to all Compute nodes.
ComputeExtraConfig # for OSP >= 13
Configuration to add to all Compute nodes.
BlockStorageExtraConfig
Configuration to add to all Block Storage nodes.
ObjectStorageExtraConfig
Configuration to add to all Object Storage nodes.
CephStorageExtraConfig
Configuration to add to all Ceph Storage nodes.
To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese:
parameter_defaults:
  NovaComputeExtraConfig:
    nova::compute::reserved_host_memory: 1024
    nova::compute::vnc_keymap: ja
Include this environment file when running openstack overcloud deploy.
Important
It is only possible to define each parameter once. Subsequent usage overrides previous values.
hiera hierarchy
Hiera looks up values in its hierarchy, from top to bottom, in /etc/puppet/hieradata.
E.g., the first file to be searched is /etc/puppet/hieradata/"%{::uuid}".yaml, followed by /etc/puppet/hieradata/heat_config_%{::deploy_config_name}.yaml, followed by /etc/puppet/hieradata/controller_extraconfig.yaml, ...
[root@overcloud-controller-0 puppet]# cat /etc/hiera.yaml
---
:backends:
  - json
  - yaml
:json:
  :datadir: /etc/puppet/hieradata
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "%{::uuid}"
  - heat_config_%{::deploy_config_name}
  - controller_extraconfig
  - extraconfig
  - controller
  - database
  - object
  - swift_devices_and_proxy
  - ceph_cluster
  - ceph
  - bootstrap_node
  - all_nodes
  - vip_data
  - RedHat
  - common
  - cinder_netapp_data
  - neutron_cisco_data
  - cisco_n1kv_data
  - neutron_bigswitch_data
  - neutron_nuage_data
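Hiera's precedence can be illustrated without a full deployment. The following sketch emulates the first-match-wins behavior over a datadir with plain shell; the key and values are hypothetical, and the real lookup on a node is done by the hiera CLI (e.g. hiera -c /etc/hiera.yaml nova::debug):

```shell
# Emulate hiera's top-to-bottom, first-match-wins lookup over datafiles.
# Key and values are made up for illustration.
lookup() {
  key="$1"; shift
  for f in "$@"; do
    # print the first file's matching line and stop
    [ -f "$f" ] && grep -m1 "^${key}:" "$f" && return 0
  done
  return 1
}

d=$(mktemp -d)
printf 'nova::debug: true\n'  > "$d/controller_extraconfig.yaml"
printf 'nova::debug: false\n' > "$d/controller.yaml"

# controller_extraconfig is higher in the hierarchy, so its value wins:
lookup nova::debug "$d/controller_extraconfig.yaml" "$d/controller.yaml"
```

This is why values placed into controller_extraconfig via ControllerExtraConfig shadow the defaults in controller.yaml.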
hiera automatic parameter lookup
The official hiera documentation on Automatic Parameter Lookup (docs.puppet.com) explains how automatic parameter lookup works. What is important to know in order to understand this guide is the following: "Puppet will automatically retrieve class parameters from Hiera, using lookup keys like myclass::parameter_one."
Mapping heat's ...Config parameters to /etc/puppet/hieradata/
It may be helpful to understand the mapping of Director's configuration data to the respective hiera datafiles.
cat /usr/share/openstack-tripleo-heat-templates/puppet/controller-puppet.yaml
(...)
  # Map heat metadata into hiera datafiles
  ControllerConfig:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        hiera:
          hierarchy:
            - '"%{::uuid}"'
            - heat_config_%{::deploy_config_name}
            - controller_extraconfig
            - extraconfig
            - controller
            - database
            - object
            - swift_devices_and_proxy # provided by SwiftDevicesAndProxyConfig
            - ceph_cluster # provided by CephClusterConfig
            - ceph
            - bootstrap_node # provided by BootstrapNodeConfig
            - all_nodes # provided by allNodesConfig
            - vip_data # provided by vip-config
            - RedHat # Workaround for https://bugzilla.redhat.com/show_bug.cgi?id=1236143
            - common
            - cinder_netapp_data # Optionally provided by ControllerExtraConfigPre
            - neutron_cisco_data # Optionally provided by ControllerExtraConfigPre
            - cisco_n1kv_data # Optionally provided by ControllerExtraConfigPre
            - neutron_bigswitch_data # Optionally provided by ControllerExtraConfigPre
            - neutron_nuage_data # Optionally provided by ControllerExtraConfigPre
          datafiles:
            controller_extraconfig:
              mapped_data: {get_param: ControllerExtraConfig}
            extraconfig:
              mapped_data: {get_param: ExtraConfig}
(...)
Thus, it's possible to determine the exact mapping:
controllerExtraConfig => controller_extraconfig
NovaComputeExtraConfig => compute_extraconfig
CephStorageExtraConfig => ceph_extraconfig
ObjectStorageExtraConfig => object_extraconfig
BlockStorageExtraConfig => volume_extraconfig
ExtraConfig => extraconfig
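This mapping can also be recovered mechanically by grepping the heat templates for mapped_data. The sketch below uses a minimal excerpt standing in for controller-puppet.yaml so it is self-contained; on the undercloud, grep the real templates under /usr/share/openstack-tripleo-heat-templates instead:

```shell
# Self-contained stand-in for a heat template; on the undercloud you would run:
#   grep -B1 "get_param: .*ExtraConfig" \
#     /usr/share/openstack-tripleo-heat-templates/puppet/*-puppet.yaml
d=$(mktemp -d)
cat > "$d/controller-puppet.yaml" <<'EOF'
          datafiles:
            controller_extraconfig:
              mapped_data: {get_param: ControllerExtraConfig}
            extraconfig:
              mapped_data: {get_param: ExtraConfig}
EOF

# -B1 shows the hiera datafile name directly above each get_param line,
# i.e. the heat parameter -> hieradata file mapping:
grep -B1 "get_param:" "$d/controller-puppet.yaml"
```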
Methods to find available parameters:
Looking up parameters on hosts
The easiest, but incomplete, way of finding these parameters is to inspect the contents of /etc/puppet/hieradata/ on an existing node. Not all parameters may be set there, but it is a good start for finding the most common hiera parameters.
[root@overcloud-controller-0 hieradata]# grep -i debug * -Ri
controller.yaml:ceilometer::debug:
controller.yaml:cinder::debug:
controller.yaml:glance::api::debug:
controller.yaml:glance::registry::debug:
controller.yaml:heat::debug:
controller.yaml:horizon::django_debug:
controller.yaml:keystone::debug:
controller.yaml:neutron::debug:
controller.yaml:nova::debug:
[root@overcloud-controller-0 hieradata]# grep -i neutron * -R
(...)
controller.yaml:neutron::enable_dhcp_agent: true
controller.yaml:neutron::enable_l3_agent: true
controller.yaml:neutron::enable_metadata_agent: true
(...)
[root@overcloud-controller-0 hieradata]# grep ml2 * -R
controller.yaml:neutron::agents::ml2::ovs::enable_tunneling: True
controller.yaml:neutron::agents::ml2::ovs::local_ip: 172.16.0.6
controller.yaml:neutron::core_plugin: ml2
controller.yaml:neutron::plugins::ml2::network_vlan_ranges: ['datacentre:1:1000']
controller.yaml:neutron::plugins::ml2::tunnel_id_ranges: ['1:1000']
controller.yaml:neutron::plugins::ml2::type_drivers: ['vxlan','vlan','flat','gre']
controller.yaml:neutron::plugins::ml2::vni_ranges: ['1:1000']
Finding parameters from /etc/puppet/modules
As already stated above (see the hiera documentation on Automatic Parameter Lookup): "Puppet will automatically retrieve class parameters from Hiera, using lookup keys like myclass::parameter_one."
This means that the syntax for parameters is:
<class>::<param>: <value>
where <class> and <param> can be found in the puppet modules' class declarations:
class <classname> (<param>)
e.g. for nova:
cat /etc/puppet/modules/nova/manifests/init.pp
(...)
class nova(
  $ensure_package = 'present',
  $database_connection = false,
  $slave_connection = false,
  $database_idle_timeout = 3600,
  $rpc_backend = 'rabbit',
  $image_service = 'nova.image.glance.GlanceImageService',
  # these glance params should be optional
  # this should probably just be configured as a glance client
  $glance_api_servers = 'localhost:9292',
  $memcached_servers = false,
  $rabbit_host = 'localhost',
  $rabbit_hosts = false,
  $rabbit_password = 'guest',
  $rabbit_port = '5672',
  $rabbit_userid = 'guest',
  $rabbit_virtual_host = '/',
  $rabbit_use_ssl = false,
  $rabbit_heartbeat_timeout_threshold = 0,
  $rabbit_heartbeat_rate = 2,
  $rabbit_ha_queues = undef,
  $kombu_ssl_ca_certs = undef,
  $kombu_ssl_certfile = undef,
  $kombu_ssl_keyfile = undef,
  $kombu_ssl_version = 'TLSv1',
  $amqp_durable_queues = false,
  $qpid_hostname = 'localhost',
  $qpid_port = '5672',
  $qpid_username = 'guest',
  $qpid_password = 'guest',
  $qpid_sasl_mechanisms = false,
  $qpid_heartbeat = 60,
  $qpid_protocol = 'tcp',
  $qpid_tcp_nodelay = true,
  $auth_strategy = 'keystone',
  $service_down_time = 60,
  $log_dir = '/var/log/nova',
  $state_path = '/var/lib/nova',
  $lock_path = $::nova::params::lock_path,
  $verbose = false,
  $debug = false,
  $periodic_interval = '60',
(...)
Which means that you can set
parameter_defaults:
  controllerExtraConfig:
    nova::debug: true
    nova::auth_strategy: 'keystone'
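Class parameters can also be enumerated mechanically instead of reading each manifest by hand. The sketch below extracts candidate <class>::<param> lookup keys from a manifest with awk; the sample manifest and temporary path are illustrative, not a real module:

```shell
# Illustrative miniature manifest standing in for
# /etc/puppet/modules/nova/manifests/init.pp:
d=$(mktemp -d)
cat > "$d/init.pp" <<'EOF'
class nova(
  $debug = false,
  $auth_strategy = 'keystone',
) {
}
EOF

# Print <class>::<param> for each parameter of the class declaration:
awk '
  /^class /         { cls = $2; sub(/\(.*/, "", cls) }
  /^[[:space:]]*\$/ { p = $1; sub(/^\$/, "", p); print cls "::" p }
' "$d/init.pp"
```

On a real node the same awk can be pointed at the module manifests under /etc/puppet/modules; multi-line defaults and nested classes would need a more careful parser.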
Similarly, the parameter neutron::plugins::ml2::network_vlan_ranges can be found in /etc/puppet/modules/neutron/manifests/plugins/ml2.pp:
class neutron::plugins::ml2 (
  $type_drivers = ['local', 'flat', 'vlan', 'gre', 'vxlan'],
  $tenant_network_types = ['local', 'flat', 'vlan', 'gre', 'vxlan'],
  $mechanism_drivers = ['openvswitch', 'linuxbridge'],
  $flat_networks = ['*'],
  $network_vlan_ranges = ['physnet1:1000:2999'],
  $tunnel_id_ranges = ['20:100'],
  $vxlan_group = '224.0.0.1',
  $vni_ranges = ['10:100'],
  $enable_security_group = true,
  $package_ensure = 'present',
  $supported_pci_vendor_devs = ['15b3:1004', '8086:10ca'],
  $sriov_agent_required = false,
) {
Which means that you can set
parameter_defaults:
  controllerExtraConfig:
    neutron::plugins::ml2::enable_security_group: false
How to set configuration of puppet classes not automatically included by tripleo/Director
tripleo/Director does not automatically include all puppet classes when it executes puppet. If one sets a parameter of a puppet class that is not automatically included by tripleo/Director, the hieradata passed for it has no effect: it appears in /etc/puppet/hieradata/ on the nodes, but is never applied by puppet because the class is missing. In order to apply such parameters, make sure to include the missing puppet class in the ExtraConfig definition.
For example in order to change CPU allocation ratio parameter on compute nodes:
parameter_defaults:
  NovaComputeExtraConfig:
    nova::scheduler::filter::cpu_allocation_ratio: '11.0'
    compute_classes:
      - '::nova::scheduler::filter'
Note: the above is only an example and will not have any effect, as the cpu_allocation_ratio parameter needs to be configured on the controllers. Please refer to "Update nova.conf on controllers to allow oversubscription of RAM and CPU" for more information about updating the allocation ratios.
The compute_classes data is included via the hiera_include in the overcloud_compute.pp puppet manifest.
As another example, to change nova's quotas in /etc/nova/nova.conf
parameter_defaults:
  controllerExtraConfig:
    nova::quota::quota_instances: 7
    controller_classes:
      - ::nova::quota
Or in order to change neutron's quotas
parameter_defaults:
  controllerExtraConfig:
    neutron::config::server_config:
      quotas/quota_port:
        value: '-1'
      quotas/quota_subnet:
        value: '-1'
      quotas/quota_network:
        value: '-1'
    controller_classes:
      - ::neutron::config
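As a side note on the neutron::config::server_config style used above: each "section/option" key ends up as that option under the corresponding [section] of neutron.conf. The one-liner below merely illustrates the mapping; the actual rendering is done by the ::neutron::config puppet class:

```shell
# Illustration only: split a "section/option=value" string on '/' and '='
# and print it the way it would appear in an INI-style config file.
printf 'quotas/quota_port=-1\n' \
  | awk -F'[/=]' '{printf "[%s]\n%s = %s\n", $1, $2, $3}'
```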
Pushing hieradata to specific roles only (as of Red Hat OpenStack Platform 10)
This is outlined in https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/advanced_overcloud_customization/#chap-Configuration_Hooks, particularly in https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/advanced_overcloud_customization/#sect-Customizing_Puppet_Configuration_Data
[ROLE]ExtraConfig
Configuration to add to a composable role. Replace [ROLE] with the composable role name.
For example, if the role is called NovaCustomRole, then ExtraConfig can be pushed via:
parameter_defaults:
NovaCustomRoleExtraConfig:
nova::compute::vcpu_pin_set: ['4-12','^8']
Pushing hieradata to specific nodes only
Red Hat OpenStack Platform 10 introduces the roles concept to create distinct roles, e.g. for compute nodes. This allows the creation of a standard compute role and a high performance compute role with CPU pinning. In versions prior to OSP 10 this is not possible, but one can emulate this behavior by using NodeDataLookup, which allows pushing specific puppet hieradata to individual nodes.
This approach is outlined in the upstream documentation:
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html
Obtaining the node UUID
From the upstream documentation: https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/introspection_data.html
"Every introspection run (as described in Basic Deployment (CLI)) collects a lot of facts about the hardware and puts them as JSON in Swift."
In OpenStack Platform 9 and above, this data can be retrieved via:
openstack baremetal introspection data save <UUID> | jq .extra.system.product.uuid
In OpenStack Platform 8, this data can be retrieved with:
token=$(openstack token issue -f value -c id)
curl -H "X-Auth-Token: $token" http://127.0.0.1:5050/v1/introspection/<UUID>/data | jq .extra.system.product.uuid
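The jq filter simply walks the nested introspection JSON. For illustration, the same extraction is shown below with python3 against a minimal hand-made sample (the UUID is made up):

```shell
# Minimal stand-in for the introspection data returned by Swift/inspector;
# only the path .extra.system.product.uuid is populated here.
echo '{"extra":{"system":{"product":{"uuid":"3E524BCA-9ECB-46D5-AA98-2A91818D1F43"}}}}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["extra"]["system"]["product"]["uuid"])'
```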
First, determine the ironic UUID:
[stack@undercloud-2 ~]$ ironic node-list | grep overcloud-node5
| d3717f77-38db-4d08-a4dc-d87d4552419d | overcloud-node5 | c59a1a6f-b374-4407-ade3-df4e8cb76314 | power on | active | False |
In this example, d3717f77-38db-4d08-a4dc-d87d4552419d is the ironic UUID of a compute node.
In OSP 8, run:
token=$(openstack token issue -f value -c id)
[stack@undercloud-2 ~]$ curl -H "X-Auth-Token: $token" http://127.0.0.1:5050/v1/introspection/d3717f77-38db-4d08-a4dc-d87d4552419d/data | jq .extra.system.product.uuid
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 37857 100 37857 0 0 150k 0 --:--:-- --:--:-- --:--:-- 150k
"3E524BCA-9ECB-46D5-AA98-2A91818D1F43"
In OSP 9 and later, run:
openstack baremetal introspection data save 9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14 | jq .extra.system.product.uuid
"75B65731-9204-4877-8F63-40D83AC2DACE"
Configuring node specific hieradata
This example pushes specific hieradata into compute nodes. In this case, create file nodedatalookup.yaml with the following contents:
resource_registry:
  OS::TripleO::ComputeExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml
parameter_defaults:
  NodeDataLookup: '{"769594FC-8A71-4EBF-9A48-DC83E56763F6": {"nova::compute::vcpu_pin_set": [ "2", "3" ]}}'
Note: the vcpu_pin_set parameter as an argument to the nova::compute class was only introduced with OpenStack Platform 9.
Register the environment file during the deployment
openstack overcloud deploy (...) \
-e ${template_base_dir}/nodedatalookup.yaml \
(...)
More complex example
Get all compute node UUIDs. In this example, 2 compute nodes will be provisioned:
[stack@undercloud-6 templates]$ ironic node-list | awk '{print $2}' | while read uuid; do if ironic node-show $uuid | grep -q compute; then echo $uuid; fi; done
(...)
9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14
599d8a7b-20dc-4fbb-a330-9adcfc239133
(...)
Get the device UUID (from dmidecode):
[stack@undercloud-6 templates]$ openstack baremetal introspection data save 9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14 | jq .extra.system.product.uuid
"75B65731-9204-4877-8F63-40D83AC2DACE"
[stack@undercloud-6 templates]$ openstack baremetal introspection data save 599d8a7b-20dc-4fbb-a330-9adcfc239133 | jq .extra.system.product.uuid
"8049FCE9-2908-4FAC-80ED-3FFAC74DEB99"
Create the file /home/stack/templates/compute_vcpu_pinning.yaml:
resource_registry:
  OS::TripleO::ComputeExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml
parameter_defaults:
  # > implies a multiline string. Indentations will *not* appear in the final output
  NodeDataLookup: >
    {
      "75B65731-9204-4877-8F63-40D83AC2DACE": {"nova::compute::vcpu_pin_set": [ "2-3" ], "nova::compute::reserved_host_memory": "2048"},
      "8049FCE9-2908-4FAC-80ED-3FFAC74DEB99": {"nova::compute::vcpu_pin_set": [ "1-2" ], "nova::compute::reserved_host_memory": "1024"}
    }
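Since NodeDataLookup is passed as a JSON string, a single typo can silently break the per-node mapping. As a sanity check before deploying, the value can be validated with python3's json.tool, which exits non-zero on malformed input; the command below checks the mapping from this example:

```shell
# Validate the NodeDataLookup payload as JSON before running the deploy.
python3 -m json.tool >/dev/null <<'EOF' && echo "NodeDataLookup JSON is valid"
{
  "75B65731-9204-4877-8F63-40D83AC2DACE": {"nova::compute::vcpu_pin_set": [ "2-3" ], "nova::compute::reserved_host_memory": "2048"},
  "8049FCE9-2908-4FAC-80ED-3FFAC74DEB99": {"nova::compute::vcpu_pin_set": [ "1-2" ], "nova::compute::reserved_host_memory": "1024"}
}
EOF
```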
Add /home/stack/templates/compute_vcpu_pinning.yaml to the deploy script, e.g.:
[stack@undercloud-6 ~]$ cat templates/deploy.sh
#!/bin/bash
if [ $PWD != /home/stack ] ; then echo "USAGE: $0 this script needs to be executed in /home/stack"; exit 1 ; fi
# deploy.sh <control_scale compute_scale ceph_scale>
control_scale=1
compute_scale=2
ceph_scale=0
echo "control_scale=$control_scale, compute_scale=$compute_scale, ceph_scale=$ceph_scale"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
template_base_dir="$DIR"
ntpserver=10.5.26.10 #RH LAB
openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ${template_base_dir}/network-environment.yaml \
-e ${template_base_dir}/compute_vcpu_pinning.yaml \
--control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
--control-scale $control_scale --compute-scale $compute_scale --ceph-storage-scale $ceph_scale \
--ntp-server $ntpserver \
--neutron-network-type vxlan --neutron-tunnel-types vxlan
Verification:
[stack@undercloud-6 ~]$ ssh heat-admin@192.0.2.19 sudo grep pin_set /etc/nova/nova.conf
# vcpu_pin_set = "4-12,^8,15"
vcpu_pin_set = 2-3
#vcpu_pin_set=<None>
[stack@undercloud-6 ~]$ ssh heat-admin@192.0.2.21 sudo grep pin_set /etc/nova/nova.conf
# vcpu_pin_set = "4-12,^8,15"
vcpu_pin_set = 1-2
#vcpu_pin_set=<None>
[stack@undercloud-6 ~]$ ssh heat-admin@192.0.2.19 sudo grep reserved_host_mem /etc/nova/nova.conf
#reserved_host_memory_mb=512
reserved_host_memory_mb=2048
[stack@undercloud-6 ~]$ ssh heat-admin@192.0.2.21 sudo grep reserved_host_mem /etc/nova/nova.conf
#reserved_host_memory_mb=512
reserved_host_memory_mb=1024
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.