Clustered Samba Configuration for Red Hat Enterprise Linux 7
After installing the cluster software and GFS2 and LVM packages, start the cluster software and create the cluster. You must configure fencing for the cluster. Once you have done this, perform the following procedure.
- Set the global Pacemaker property no-quorum-policy to freeze.
Note:
By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, both the applications using the GFS2 mounts and the GFS2 mounts themselves cannot be correctly stopped. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
To address this situation, you can set no-quorum-policy=freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
~]# pcs property set no-quorum-policy=freeze
- Set up dlm as a required dependency for clvmd and GFS2.
~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
- After verifying that locking_type=3 in the lvm.conf file on each node of the cluster, set up clvmd as a cluster resource.
~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
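The locking type can be checked from the shell before creating the clvmd resource. The following is a sketch: the LVM_CONF variable is an illustrative override (the real file is /etc/lvm/lvm.conf), and lvmconf --enable-cluster is the RHEL helper that sets locking_type=3.

```shell
# Check the LVM locking type; clvmd requires locking_type=3 (built-in
# clustered locking). LVM_CONF is an overridable path for illustration.
LVM_CONF="${LVM_CONF:-/etc/lvm/lvm.conf}"
grep -E '^[[:space:]]*locking_type[[:space:]]*=' "$LVM_CONF" 2>/dev/null \
    || echo "locking_type not found in $LVM_CONF"
# If the value is not 3, running "lvmconf --enable-cluster" rewrites
# lvm.conf to set locking_type = 3; repeat the check on every node.
```

Run the check on each cluster node; a node left at the default locking type will prevent clvmd from starting cleanly.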
- Set up the clvmd and dlm dependency and startup order. clvmd must start after dlm and must run on the same node as dlm.
~]# pcs constraint order start dlm-clone then clvmd-clone
~]# pcs constraint colocation add clvmd-clone with dlm-clone
- Create the clustered LVs and format the volumes with GFS2 filesystems. Ensure that you create enough journals for each of the nodes in your cluster and that the correct cluster name is used with the -t option to mkfs.gfs2. We have used csmb as the cluster name in this example:
~]# pvcreate /dev/vdb
~]# vgcreate -Ay -cy csmb_vg /dev/vdb
Create the volume for CTDB's internal locking:
~]# lvcreate -L1G -n ctdb_lv csmb_vg
~]# mkfs.gfs2 -j3 -p lock_dlm -t csmb:ctdb /dev/csmb_vg/ctdb_lv
Create one or more GFS2 file systems that will be used to share over Samba:
~]# lvcreate -L50G -n csmb_lv1 csmb_vg
~]# mkfs.gfs2 -j3 -p lock_dlm -t csmb:csmb1 /dev/csmb_vg/csmb_lv1
~]# lvcreate -L100G -n csmb_lv2 csmb_vg
~]# mkfs.gfs2 -j3 -p lock_dlm -t csmb:csmb2 /dev/csmb_vg/csmb_lv2
- Configure these GFS2 filesystems as Filesystem resources.
You should not add these file systems to the /etc/fstab file because they will be managed as Pacemaker cluster resources. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for full configuration options.
~]# pcs resource create ctdb_fs Filesystem device="/dev/csmb_vg/ctdb_lv" directory="/mnt/ctdb" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
~]# pcs resource create csmb_fs1 Filesystem device="/dev/csmb_vg/csmb_lv1" directory="/mnt/share1" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
~]# pcs resource create csmb_fs2 Filesystem device="/dev/csmb_vg/csmb_lv2" directory="/mnt/share2" fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true
- Verify that the GFS2 filesystems are mounted as expected.
~]# mount | grep gfs2
/dev/mapper/csmb_vg-ctdb_lv on /mnt/ctdb type gfs2 (rw,seclabel)
/dev/mapper/csmb_vg-csmb_lv1 on /mnt/share1 type gfs2 (rw,seclabel)
/dev/mapper/csmb_vg-csmb_lv2 on /mnt/share2 type gfs2 (rw,seclabel)
- Set up GFS2 and clvmd dependency and startup order. GFS2 must start after clvmd and must run on the same node as clvmd.
~]# pcs constraint order start clvmd-clone then ctdb_fs-clone
~]# pcs constraint colocation add ctdb_fs-clone with clvmd-clone
~]# pcs constraint order start clvmd-clone then csmb_fs1-clone
~]# pcs constraint colocation add csmb_fs1-clone with clvmd-clone
~]# pcs constraint order start clvmd-clone then csmb_fs2-clone
~]# pcs constraint colocation add csmb_fs2-clone with clvmd-clone
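Once the ordering and colocation constraints are in place, it is worth confirming that Pacemaker recorded all of them. A sketch, guarded so it is harmless on a host without pcs:

```shell
# List every ordering and colocation constraint the cluster knows about;
# each dlm/clvmd/filesystem clone pair created above should appear.
if command -v pcs >/dev/null 2>&1; then
    pcs constraint || echo "pcs constraint failed (is the cluster running?)"
else
    echo "pcs not available on this host"
fi
```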
- CTDB configuration
The CTDB configuration file is located at /etc/sysconfig/ctdb. The fields that need to be configured for CTDB operation are:
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes
CTDB_NODES specifies the location of the file which contains the cluster nodes list. In our example, the contents of the /etc/ctdb/nodes file are:
192.168.1.151
192.168.1.152
192.168.1.153
This simply lists the cluster nodes' IP addresses. In this example, we assume that there is only one interface/IP on each node that is used for both cluster/CTDB communication and serving clients. If you have two interfaces on each node and wish to dedicate one set of interfaces for cluster/CTDB communication, use those IP addresses here and make sure the hostnames/IP addresses used in the cluster configuration are the same.
It is critical that this file be identical on all nodes because the ordering is important and CTDB will fail if it finds different information on different nodes.
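A quick way to catch a mismatched nodes file is to compare a checksum of it on every node. The sketch below fingerprints the local copy; the NODES_FILE variable is an illustrative override, md5sum is just a convenient fingerprint, and the ssh loop in the comment uses the example addresses from above.

```shell
# Fingerprint the local nodes file; the checksum must be identical on
# every node, because CTDB depends on the node ordering in this file.
NODES_FILE="${NODES_FILE:-/etc/ctdb/nodes}"
if [ -f "$NODES_FILE" ]; then
    md5sum "$NODES_FILE"
else
    echo "$NODES_FILE not present on this host"
fi
# To compare across the cluster, run the same command on each node, e.g.:
#   for n in 192.168.1.151 192.168.1.152 192.168.1.153; do
#       ssh "$n" md5sum /etc/ctdb/nodes
#   done
```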
CTDB_PUBLIC_ADDRESSES specifies the location of the file that contains a list of IP addresses that can be used to export the Samba shares of this cluster. The contents of the /etc/ctdb/public_addresses file in our example are:
192.168.1.201/0 eth0
192.168.1.202/0 eth0
192.168.1.203/0 eth0
We are using three addresses in our example above which are currently unused on the network. Please choose addresses that can be accessed by the intended clients.
These are the IP addresses that you should configure in DNS for the name of the clustered Samba server and are the addresses that CIFS clients will connect to. By using different public_addresses files on different nodes, it is possible to partition the cluster into subsets of nodes.
CTDB_RECOVERY_LOCK specifies a lock file that CTDB uses internally for recovery and this file must reside on shared storage such that all the cluster nodes have access to it. In this example, we've used the GFS2 filesystem that will be mounted at /mnt/ctdb on all nodes. This is the ctdb_lv we set up earlier in steps 5 and 6. This filesystem is different from the GFS2 filesystem(s) that will host the Samba share(s) that we plan to export. This recovery lock file is used to prevent split-brain scenarios. With newer versions of CTDB (>= 1.0.112), specifying this file is optional as long as it is substituted with another split-brain prevention mechanism.
CTDB_MANAGES_SAMBA=yes. Enabling this allows CTDB to start and stop the Samba service as it deems necessary to provide service migration/failover, etc.
CTDB_MANAGES_WINBIND=yes. If running on a member server, you will need to set this too.
For more information on CTDB configuration, see: http://ctdb.samba.org/configuring.html
- Samba configuration
Detailed Samba configuration is outside the scope of this document but is easily available from several other sources. We use a simplistic configuration in our example and explain certain clustering-specific details.
The Samba Configuration file located at /etc/samba/smb.conf in our example looks like this:
[global]
guest ok = yes
clustering = yes
netbios name = csmb-server
[csmb1]
comment = Clustered Samba Share1
public = yes
path = /mnt/share1
writeable = yes
[csmb2]
comment = Clustered Samba Share2
public = yes
path = /mnt/share2
writeable = yes
We export two shares with names csmb1 and csmb2 located at /mnt/share1 and /mnt/share2 respectively. These are the two GFS2 file systems we set up earlier (in steps 5 and 6) to host Samba shares and are distinct from the third, smaller CTDB-specific GFS2 filesystem used for the CTDB lock file at /mnt/ctdb/.ctdb.lock.
Ensure that the share paths (in our case /mnt/share1 and /mnt/share2) have the required permissions to be exported and accessed by the intended users.
The clustering=yes option instructs Samba to work with CTDB.
netbios name = csmb-server explicitly sets all the nodes to have a common NetBIOS name.
The smb.conf file should be identical on all the cluster nodes.
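A quick sanity check that the clustering-specific settings made it into the file on a given node can be done with grep; the SMB_CONF variable is an illustrative override, and if Samba is installed, testparm -s gives a fuller syntax check.

```shell
# Confirm the clustering-specific settings are present in smb.conf on
# this node. SMB_CONF is an overridable path used here for illustration.
SMB_CONF="${SMB_CONF:-/etc/samba/smb.conf}"
if [ -f "$SMB_CONF" ]; then
    grep -E '^[[:space:]]*(clustering|netbios name)[[:space:]]*=' "$SMB_CONF"
else
    echo "$SMB_CONF not present on this host"
fi
# "testparm -s" (from the samba packages) validates the whole file.
```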
Set the Samba password for the users who are allowed to access the shares. In our example, we add the Linux user testuser to the Samba password database on all nodes.
~]# smbpasswd -a testuser
Set permissions on the shares so they can be accessed via Samba:
~]# chmod 777 /mnt/share1 /mnt/share2
- Bringing up CTDB/Samba
Ensure that the underlying cluster stack is up and running. Running systemctl start ctdb on all the nodes will bring up the ctdbd daemon and start CTDB. Currently, it can take up to a minute for CTDB to join all the cluster nodes into membership and launch Samba. Running ctdb status will show you how CTDB is doing:
~]# ctdb status
Number of nodes:3
pnn:0 192.168.1.151 OK (THIS NODE)
pnn:1 192.168.1.152 OK
pnn:2 192.168.1.153 OK
Generation:1410259202
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
Since we configured ctdb with CTDB_MANAGES_SAMBA=yes, CTDB will also start up the Samba service on all nodes and export all configured Samba shares.
When you see that all nodes are OK, it's safe to move on to the next step.
- Using the Clustered Samba server
Clients can connect to the Samba share(s) that we just exported by connecting to one of the IP addresses specified in the /etc/ctdb/public_addresses file.
Example:
mount -t cifs //192.168.1.201/csmb1 /mnt/sambashare -o user=testuser,password=password
or
smbclient //192.168.1.201/csmb1
Alternate Configuration for setting up CTDB/Samba with an external Active Directory Domain Server:
This alternate setup configures CTDB/Samba to join an existing Windows Domain as a member server by providing the details of the Domain Controller server. In this example, the Domain Controller is also the Active Directory server. The first step is to configure and start up the Active Directory server and add the users that you will use with the clustered Samba server. Configuring Active Directory (AD) Domain Services (DS) is out of the scope of this document.
After making sure that CTDB_MANAGES_WINBIND is enabled in /etc/sysconfig/ctdb, /etc/samba/smb.conf needs a few additions. The modified smb.conf file from our example above looks as follows:
[global]
guest ok = yes
clustering = yes
netbios name = csmb-server
workgroup = LAB
ea support = yes
security = ads
realm = LAB.MSP.REDHAT.COM
password server = win2008s-x64-1
encrypt passwords = yes
winbind uid = 10000-20000
winbind gid = 10000-20000
winbind enum users = yes
winbind enum groups = yes
winbind use default domain = yes
[csmb1]
comment = Clustered Samba Share1
public = yes
path = /mnt/share1
writeable = yes
[csmb2]
comment = Clustered Samba Share2
public = yes
path = /mnt/share2
writeable = yes
Here, security = ads means that Samba should use its Active Directory security mode. The realm parameter is the Windows domain that is controlled by the DC. The password server parameter is the hostname of the Windows 2008 server that runs AD; in our case, this server is also the DC.
winbind uid and winbind gid are free uid/gid ranges on the Linux servers that Samba can use as proxy IDs for the users in the AD.
The next step is to configure Kerberos. The /etc/krb5.conf file for our example looks like this:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = LAB.MSP.REDHAT.COM
ticket_lifetime = 24h
[realms]
LAB.MSP.REDHAT.COM = {
kdc = win2008s-x64-1.lab.msp.redhat.com
admin_server = win2008s-x64-1.lab.msp.redhat.com
default_domain = lab.msp.redhat.com
}
[domain_realm]
.lab.msp.redhat.com = LAB.MSP.REDHAT.COM
lab.msp.redhat.com = LAB.MSP.REDHAT.COM
[kdc]
profile = /var/kerberos/krb5kdc/kdc.conf
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
Configure /etc/nsswitch.conf to add winbind to the passwd and group lookups:
passwd: files winbind
shadow: files
group: files winbind
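The nsswitch.conf edit can be verified from the shell; the NSSWITCH variable below is an illustrative override, and the commands in the trailing comment are the standard follow-up checks once winbindd is running.

```shell
# Confirm winbind is listed for passwd and group lookups. NSSWITCH is
# an overridable path used here for illustration.
NSSWITCH="${NSSWITCH:-/etc/nsswitch.conf}"
grep -E '^(passwd|group):' "$NSSWITCH" 2>/dev/null \
    || echo "no passwd/group entries found in $NSSWITCH"
# Once winbindd is up, "wbinfo -u" should list the domain users and
# "getent passwd" should include them alongside the local accounts.
```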
After configuring all these bits, bringing up the cluster and CTDB as described in step 11 above should connect Samba/CTDB to the Active Directory, and the Windows Domain users should be able to access the Samba shares. If things do not work as expected, you can shut down CTDB and attempt to debug by hand: use kinit <domain administrator@domain.com> to see whether the AD server gives you a valid Kerberos ticket (check with the klist command). Following this with net ads join LAB.MSP.REDHAT.COM should allow the computer to connect to the Active Directory; systemctl start ctdb will do the same thing as well.