Creating a two-node RHEL cluster with a Virtual IP using CMAN and Pacemaker

Important links:
http://blog.mattbrock.co.uk/creating-a-two-node-centos-6-cluster-with-floating-ip-using-cman-and-pacemaker/
http://clusterlabs.org/quickstart-redhat-6.html

Configuring the CentOS repo on RHEL 6.6

The cluster packages are installed from the CentOS 6 base repository, so configure it on both machines:

[root@waepprrkhe001 ~]# cat /etc/yum.repos.d/centos.repo
[centos-6-base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os
baseurl=http://mirror.centos.org/centos/6/os/x86_64/
enabled=1
gpgkey=http://mirror.centos.org/centos/6/os/x86_64/RPM-GPG-KEY-CentOS-6
[root@waepprrkhe001 ~]#
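
As a quick sanity check before installing anything (this assumes standard yum behaviour and network access to the CentOS mirrors), confirm the repo is usable:
yum clean all
yum repolist enabled
The centos-6-base repo should be listed with a non-zero package count.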

Installation and initial configuration

Install the required packages on both machines:
yum install pacemaker cman pcs ccs resource-agents
Set up and configure the cluster on the primary machine, changing vipcluster, primary.server.com and secondary.server.com as needed:
ccs -f /etc/cluster/cluster.conf --createcluster vipcluster
ccs -f /etc/cluster/cluster.conf --addnode primary.server.com
ccs -f /etc/cluster/cluster.conf --addnode secondary.server.com
ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect primary.server.com
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect secondary.server.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk primary.server.com pcmk-redirect port=primary.server.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk secondary.server.com pcmk-redirect port=secondary.server.com
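For reference, the generated /etc/cluster/cluster.conf should look roughly like the sketch below; the config_version value is just an example, and ccs may add a few extra empty elements depending on version:
[root@waepprrkhe001 ~]# cat /etc/cluster/cluster.conf
<cluster config_version="9" name="vipcluster">
  <clusternodes>
    <clusternode name="primary.server.com" nodeid="1">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="primary.server.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="secondary.server.com" nodeid="2">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="secondary.server.com"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_pcmk" name="pcmk"/>
  </fencedevices>
</cluster>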
Copy /etc/cluster/cluster.conf from the primary server to the secondary server.
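One way to do the copy, assuming root SSH access from the primary to the secondary (adjust to your environment):
scp /etc/cluster/cluster.conf root@secondary.server.com:/etc/cluster/cluster.conf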
CMAN normally waits for quorum before starting, which is not useful in a two-node cluster, so disable the quorum wait on both machines:
echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

Start the services

Start the services on both servers:
service cman start
service pacemaker start
Make sure both services start automatically at boot:
chkconfig cman on
chkconfig pacemaker on
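Once cman is running on both nodes, cluster membership can be verified with cman_tool; the exact output format varies slightly between versions, but both nodes should be listed with status M (member):
cman_tool nodes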

Configure the cluster and create the floating IP

Set the basic cluster properties on the primary server. STONITH (fencing) is disabled and loss of quorum is ignored, which suits this simple two-node setup:
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
Create the virtual IP resource on the primary server. The VIP floats between the two servers: it normally runs on the primary and, if the primary goes down, it moves to the secondary.
pcs resource create livefrontendIP0 ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=32 op monitor interval=30s
pcs constraint location livefrontendIP0 prefers primary.server.com=INFINITY
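To confirm the address is actually active on the primary, check the interface with ip; eth0 below is only an assumption, so use whichever NIC carries the 192.168.0.x network:
ip addr show eth0
The 192.168.0.100 address added by the IPaddr2 agent should appear on that interface.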

Cluster administration

To monitor the status of the cluster:
pcs status
Here is the output from the primary:
[root@waepprrkhe001 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:53:57 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: primary.server.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ primary.server.com secondary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started primary.server.com

[root@waepprrkhe001 ~]#    
To show the full cluster configuration:
pcs config
Here is the output from the primary:
[root@waepprrkhe001 ~]# pcs config
Cluster Name: vipcluster
Corosync Nodes:
 primary.server.com secondary.server.com
Pacemaker Nodes:
 primary.server.com secondary.server.com

Resources:
 Resource: livefrontendIP0 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.0.100 cidr_netmask=32
  Operations: start interval=0s timeout=20s (livefrontendIP0-start-interval-0s)
              stop interval=0s timeout=20s (livefrontendIP0-stop-interval-0s)
              monitor interval=30s (livefrontendIP0-monitor-interval-30s)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: livefrontendIP0
    Enabled on: primary.server.com (score:INFINITY) (id:location-livefrontendIP0-primary.server.com-INFINITY)
Ordering Constraints:
Colocation Constraints:

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.11-97629de
 no-quorum-policy: ignore
 stonith-enabled: false
[root@waepprrkhe001 ~]#
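
Beyond pcs status and pcs config, a few other pcs commands are useful for day-to-day administration. The commands below are a sketch using the resource and node names from this cluster:
pcs resource show                            # list resources and where they are running
pcs cluster standby secondary.server.com     # take the secondary out of service for maintenance
pcs cluster unstandby secondary.server.com   # bring it back into service
pcs resource cleanup livefrontendIP0         # clear failure history for the VIP resource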

Failover testing

Shut down the secondary server; the VIP stays on the primary:
[root@waepprrkhe001 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:08:00 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: primary.server.com - partition WITHOUT quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ primary.server.com ]
OFFLINE: [ secondary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started primary.server.com
Shut down the primary server; the VIP fails over to the secondary (this output is taken on the secondary node):
[root@waepprrkhe002 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:05:30 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: secondary.server.com - partition WITHOUT quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ secondary.server.com ]
OFFLINE: [ primary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started secondary.server.com
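
Because of the INFINITY location constraint, the VIP should move back to primary.server.com once that node rejoins the cluster. As a less disruptive alternative to shutting a node down, failover can also be exercised with standby mode (a sketch, using the node names above):
pcs cluster standby primary.server.com       # the VIP should move to the secondary
pcs status                                   # confirm where livefrontendIP0 is running
pcs cluster unstandby primary.server.com     # the VIP moves back due to the INFINITY preference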
