
HAST and ZFS with CARP failover

Discussion in 'Howtos and FAQs (Moderated)' started by gkontos, Feb 8, 2012.

  1. gkontos

    HAST (Highly Available Storage) is a relatively new feature in FreeBSD and is under constant development. HAST allows data to be stored transparently on two physically separated machines connected over a TCP/IP network. HAST operates at the block level, making it transparent to file systems and providing disk-like devices in the /dev/hast directory.

    In this article we will create two identical HAST nodes, hast1 and hast2. Each machine will use one NIC, connected to a dedicated VLAN, for data synchronization, and a second NIC configured with CARP so that both nodes can share the same IP address on the network. The first node will be called storage1.hast.test, the second storage2.hast.test, and both will answer on a common IP address which we will bind to storage.hast.test.

    HAST resolves its resource node names according to each machine's hostname. Therefore, we will use hast1.freebsd.loc and hast2.freebsd.loc as the machines' hostnames so that HAST can operate without complaining.
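
    If the machines are already up and running, the hostname can also be changed on the fly so that hastd immediately sees the correct node name (shown here for the first node; use hast2.freebsd.loc on the second):

    Code:
    hast1# hostname hast1.freebsd.loc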

    For starters, let's set up two identical nodes. For this example I have installed FreeBSD 9.0-RELEASE on two different instances using Linux KVM. Both nodes have 512MB of RAM, one SATA drive containing the OS, and three SATA drives which will be used to create our shared RAIDZ1 pool.

    CARP does not require compiling a new kernel. We can simply load it as a module by adding the following to /boot/loader.conf:

    Code:
     if_carp_load="YES"


    With both nodes set up, it is time to make some adjustments. First, a decent /etc/rc.conf for the first node:

    Code:
    zfs_enable="YES"
    
    ###Primary Interface##
    ifconfig_re0="inet 10.10.10.181  netmask 255.255.255.0"
    
    ###Secondary Interface for HAST###
    ifconfig_re1="inet 192.168.100.100  netmask 255.255.255.0"
    
    defaultrouter="10.10.10.1"
    sshd_enable="YES"
    hostname="hast1.freebsd.loc"
    
    ##CARP INTERFACE SETUP##
    cloned_interfaces="carp0"
    ifconfig_carp0="inet 10.10.10.180 netmask 255.255.255.0 vhid 1 pass mypassword advskew 0"
    
    hastd_enable=YES


    The second node's configuration will match the first, except for the IP addresses and the hostname:

    Code:
    zfs_enable="YES"
    
    ###Primary Interface##
    ifconfig_re0="inet 10.10.10.182  netmask 255.255.255.0"
    
    ###Secondary Interface for HAST###
    ifconfig_re1="inet 192.168.100.101  netmask 255.255.255.0"
    
    defaultrouter="10.10.10.1"
    sshd_enable="YES"
    hostname="hast2.freebsd.loc"
    
    ##CARP INTERFACE SETUP##
    cloned_interfaces="carp0"
    ifconfig_carp0="inet 10.10.10.180 netmask 255.255.255.0 vhid 1 pass mypassword advskew 0"
    
    hastd_enable=YES


    At this point we have given each node's re1 interface an IP address for HAST synchronization. We have also assigned an IP address to re0 on each node, and both nodes share a third, common IP address on carp0.
    As a result, re1 is used for HAST synchronization on its own VLAN, while carp0, cloned from re0, sits on the same VLAN as the rest of our clients.
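
    Once both machines are up, we can quickly verify CARP before going any further: ifconfig carp0 should report MASTER on one node and BACKUP on the other, with output roughly like this (abbreviated):

    Code:
    hast1# ifconfig carp0
    carp0: flags=49<UP,LOOPBACK,RUNNING> metric 0 mtu 1500
            inet 10.10.10.180 netmask 0xffffff00
            carp: MASTER vhid 1 advbase 1 advskew 0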

    In order for HAST to function correctly, each node must be able to resolve the other's addresses. We don't want to rely on DNS for this because DNS can fail. Instead, we will use an /etc/hosts file that is identical on every node.

    Code:
    ::1			localhost localhost.freebsd.loc
    127.0.0.1		localhost localhost.freebsd.loc
    192.168.100.100		hast1.freebsd.loc hast1
    192.168.100.101		hast2.freebsd.loc hast2
    
    10.10.10.181          	storage1.hast.test storage1
    10.10.10.182          	storage2.hast.test storage2
    10.10.10.180	      	storage.hast.test  storage
    


    Next, we have to create the /etc/hast.conf file. Here we declare the resources that we want to create. Each resource will eventually appear as a device under /dev/hast on the primary node. For every resource we specify, per node, the local physical device and the remote node it replicates to. The /etc/hast.conf file must be exactly the same on every node.

    Code:
    resource disk1 {
            on hast1 {
                    local /dev/ad1
                    remote hast2
            }
            on  hast2 {
                    local /dev/ad1
                    remote hast1
            }
    }
    
    resource disk2 {
            on  hast1 {
                    local /dev/ad2
                    remote hast2
            }
            on  hast2 {
                    local /dev/ad2
                    remote hast1
            }
    }
    
    resource disk3 {
            on  hast1 {
                    local /dev/ad3
                    remote hast2
            }
            on  hast2 {
                    local /dev/ad3
                    remote hast1
            }
    }


    In this example we are sharing three resources: disk1, disk2 and disk3. Each resource maps a local device on one node to its counterpart on the remote node. With this configuration in place, we are ready to begin setting up our HAST devices.

    Let's start hastd on both nodes first:

    Code:
    hast1# /etc/rc.d/hastd start

    Code:
    hast2# /etc/rc.d/hastd start


    Now on the primary node we will initialize our resources, create them and finally assign a primary role:

    Code:
    hast1# hastctl role init disk1
    hast1# hastctl role init disk2
    hast1# hastctl role init disk3
    hast1# hastctl create disk1
    hast1# hastctl create disk2
    hast1# hastctl create disk3
    hast1# hastctl role primary disk1
    hast1# hastctl role primary disk2
    hast1# hastctl role primary disk3


    Next, on the secondary node we will initialize our resources, create them and finally assign a secondary role:

    Code:
    hast2# hastctl role init disk1
    hast2# hastctl role init disk2
    hast2# hastctl role init disk3
    hast2# hastctl create disk1
    hast2# hastctl create disk2
    hast2# hastctl create disk3
    hast2# hastctl role secondary disk1
    hast2# hastctl role secondary disk2
    hast2# hastctl role secondary disk3


    There are other ways of creating and assigning roles to each resource. Having repeated this procedure a few times, I have found that this one works reliably.

    Now check the status on both nodes:

    Code:
    hast1# hastctl status
    disk1:
      role: primary
      provname: disk1
      localpath: /dev/ada1
      ...
      remoteaddr: hast2
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...
    disk2:
      role: primary
      provname: disk2
      localpath: /dev/ada2
      ...
      remoteaddr: hast2
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...
    disk3:
      role: primary
      provname: disk3
      localpath: /dev/ada3
      ...
      remoteaddr: hast2
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...


    The first node looks good. Status is complete.

    Code:
    hast2# hastctl status
    disk1:
      role: secondary
      provname: disk1
      localpath: /dev/ada1
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...
    disk2:
      role: secondary
      provname: disk2
      localpath: /dev/ada2
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...
    disk3:
      role: secondary
      provname: disk3
      localpath: /dev/ada3
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      dirty: 0 (0B)
      ...


    So does the second. Like I mentioned earlier, there are different ways of doing this the first time. What you have to look for is
    status: complete. If you get a degraded status you can always repeat the procedure.

    Now it is time to create our ZFS pool. The primary node should have a /dev/hast directory containing our resources. This directory appears only on the active node.
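
    A quick sanity check before building the pool (with the resources defined above, the listing should look something like this):

    Code:
    hast1# ls /dev/hast
    disk1   disk2   disk3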

    Code:
    
    hast1# zpool create zhast raidz1 /dev/hast/disk1 /dev/hast/disk2 /dev/hast/disk3
    hast1# zpool status zhast
     pool: zhast
     state: ONLINE
     scan: none requested
     config:
    
    	NAME            STATE     READ WRITE CKSUM
    	zhast           ONLINE       0     0     0
    	  raidz1-0      ONLINE       0     0     0
    	    hast/disk1  ONLINE       0     0     0
    	    hast/disk2  ONLINE       0     0     0
    	    hast/disk3  ONLINE       0     0     0
    


    We can now use hastctl status on each node to see if everything looks ok. The magic word we are looking for here is:
    replication: fullsync
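
    A quick way to eyeball that on either node is to filter the output, for example:

    Code:
    hast1# hastctl status | grep replication
      replication: fullsync
      replication: fullsync
      replication: fullsync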

    At this point both of our nodes should be ready for failover. We have storage1 running as the primary, sharing a pool called zhast, while storage2 is currently in standby mode. If we have set up DNS properly we can ssh to storage.hast.test, or simply use its CARP IP, 10.10.10.180.
     
  2. gkontos

    HAST and ZFS with CARP failover (Part 2)

    In order to perform a failover, we first have to export the pool on the first node and change the role of each resource to secondary. Then we change the role of each resource to primary on the standby node and import the pool there. This procedure will be done manually to test whether failover really works, but for a real HA solution we will eventually create a script that takes care of it.

    First, let's export our pool and change our resources' roles:

    Code:
    hast1# zpool export zhast
    hast1# hastctl role secondary disk1
    hast1# hastctl role secondary disk2
    hast1# hastctl role secondary disk3
    


    Now, let's reverse the procedure on the standby node:

    Code:
    hast2# hastctl role primary disk1
    hast2# hastctl role primary disk2
    hast2# hastctl role primary disk3
    hast2# zpool import zhast
    


    The roles have successfully changed; let's see our pool status:

    Code:
    hast2# zpool status zhast
     pool: zhast
     state: ONLINE
     scan: none requested
     config:
    
    	NAME            STATE     READ WRITE CKSUM
    	zhast           ONLINE       0     0     0
    	  raidz1-0      ONLINE       0     0     0
    	    hast/disk1  ONLINE       0     0     0
    	    hast/disk2  ONLINE       0     0     0
    	    hast/disk3  ONLINE       0     0     0
    
    errors: No known data errors
    


    Again, by using hastctl status on each node we can verify that the roles have indeed changed and that the status is complete. This is a sample output from the second node now in charge:

    Code:
    hast2# hastctl status
    disk1:
      role: primary
      provname: disk1
      localpath: /dev/ad1
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      ...
    disk2:
      role: primary
      provname: disk2
      localpath: /dev/ad2
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      ...
    disk3:
      role: primary
      provname: disk3
      localpath: /dev/ad3
      ...
      remoteaddr: hast1
      replication: fullsync
      status: complete
      ...
    


    It is now time to automate this procedure. When do we want our servers to fail over automatically?
    One reason would be that the primary node stops responding on the external network and can no longer serve its clients. Using devd(8) events we can catch the carp0 interface changing link state (going up or down) and act on it.

    Add the following lines to /etc/devd.conf on both nodes:

    Code:
    notify 30 {
    	match "system" "IFNET";
    	match "subsystem" "carp0";
    	match "type" "LINK_UP";
    	action "/usr/local/bin/failover master";
    };
    
    notify 30 {
    	match "system" "IFNET";
    	match "subsystem" "carp0";
    	match "type" "LINK_DOWN";
    	action "/usr/local/bin/failover slave";
    };
    


    And now let's create the failover script, which will automatically do what we just did manually:

    Code:
    #!/bin/sh
    
    # Original script by Freddie Cash <fjwcash@gmail.com>
    # Modified by Michael W. Lucas <mwlucas@BlackHelicopters.org>
    # and Viktor Petersson <vpetersson@wireload.net>
    # Modified by George Kontostanos <gkontos.mail@gmail.com>
    
    # The names of the HAST resources, as listed in /etc/hast.conf
    resources="disk1 disk2 disk3"
    
    # delay in mounting HAST resource after becoming master
    # make your best guess
    delay=3
    
    # logging
    log="local0.debug"
    name="failover"
    pool="zhast"
    
    # end of user configurable stuff
    
    case "$1" in
    	master)
    		logger -p $log -t $name "Switching to primary provider for ${resources}."
    		sleep ${delay}
    
    		# Wait for any "hastd secondary" processes to stop
    		for disk in ${resources}; do
    			while $( pgrep -lf "hastd: ${disk} \(secondary\)" > /dev/null 2>&1 ); do
    				sleep 1
    			done
    
    			# Switch role for each disk
    			hastctl role primary ${disk}
    			if [ $? -ne 0 ]; then
    				logger -p $log -t $name "Unable to change role to primary for resource ${disk}."
    				exit 1
    			fi
    		done
    
    		# Wait for the /dev/hast/* devices to appear
    		for disk in ${resources}; do
    			for I in $( jot 60 ); do
    				[ -c "/dev/hast/${disk}" ] && break
    				sleep 0.5
    			done
    
    			if [ ! -c "/dev/hast/${disk}" ]; then
    				logger -p $log -t $name "GEOM provider /dev/hast/${disk} did not appear."
    				exit 1
    			fi
    		done
    
    		logger -p $log -t $name "Role for HAST resources ${resources} switched to primary."
    
    
    		logger -p $log -t $name "Importing Pool"
    		# Import ZFS pool. Do it forcibly as it remembers hostid of
                    # the other cluster node.
                    out=`zpool import -f "${pool}" 2>&1`
                    if [ $? -ne 0 ]; then
                        logger -p local0.error -t hast "ZFS pool import for resource ${resource} failed: ${out}."
                        exit 1
                    fi
                    logger -p local0.debug -t hast "ZFS pool for resource ${resource} imported."
    
    	;;
    
    	slave)
    		logger -p $log -t $name "Switching to secondary provider for ${resources}."
    
    		# Switch roles for the HAST resources
    		zpool list | egrep -q "^${pool} "
            	if [ $? -eq 0 ]; then
                    	# Forcibly export file pool.
                    	out=`zpool export -f "${pool}" 2>&1`
                   		 if [ $? -ne 0 ]; then
                            	logger -p local0.error -t hast "Unable to export pool for resource ${resource}: ${out}."
                            	exit 1
                    	 fi
                    	logger -p local0.debug -t hast "ZFS pool for resource ${resource} exported."
            	fi
    		for disk in ${resources}; do
    			sleep $delay
    			hastctl role secondary ${disk} 2>&1
    			if [ $? -ne 0 ]; then
    				logger -p $log -t $name "Unable to switch role to secondary for resource ${disk}."
    				exit 1
    			fi
    			logger -p $log -t $name "Role switched to secondary for resource ${disk}."
    		done
    	;;
    esac
    


    Let's try it and see if it works. Log into both the currently active node and the standby node. Confirm which one is active by issuing a hastctl status command, then force a failover by bringing down the interface associated with carp0.

    Code:
    hast1# ifconfig re0 down


    Watch the generated messages:

    Code:
    hast1# tail -f /var/log/debug.log
    
    Feb  6 15:01:41 hast1 failover: Switching to secondary provider for disk1 disk2 disk3.
    Feb  6 15:01:49 hast1 hast: ZFS pool for resource  exported.
    Feb  6 15:01:52 hast1 failover: Role switched to secondary for resource disk1.
    Feb  6 15:01:55 hast1 failover: Role switched to secondary for resource disk2.
    Feb  6 15:01:58 hast1 failover: Role switched to secondary for resource disk3.
    


    Code:
    hast2# tail -f /var/log/debug.log
    
    Feb  6 15:02:15 hast2 failover: Switching to primary provider for disk1 disk2 disk3.
    Feb  6 15:02:19 hast2 failover: Role for HAST resources disk1 disk2 disk3 switched to primary.
    Feb  6 15:02:19 hast2 failover: Importing Pool
    Feb  6 15:02:52 hast2 hast: ZFS pool for resource  imported.
    


    Voila! The failover worked like a charm and now hast2 has assumed the primary role.


    Further considerations:


    What we did today is a basic setup of two nodes sharing a RAIDZ1 pool, with automatic role failover in case of a failure that results in the loss of the CARP interface.

    Obviously, a similar devd event would be generated if we lose the HAST replication interface. That case needs to be handled in a similar way, since losing that interface leaves us with no synchronization at all.

    Going further, we would have to add scripts that bring services up and down during a failover.
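
    As a rough sketch of what that could look like (using nfsd purely as a hypothetical example, not something configured above), the failover script's two branches could gain lines such as:

    Code:
    # in the "master)" branch, after the pool has been imported
    service nfsd onestart

    # in the "slave)" branch, before the pool is exported
    service nfsd onestop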

    Original article: http://www.aisecure.net/2012/02/07/hast-freebsd-zfs-with-carp-failover/
    Resources: Michael W. Lucas, The FreeBSD Handbook
     
  3. Sylhouette

    Did you also test a sudden reboot of the master? If I do this, I get into all kinds of trouble.

    Mainly because the CARP interface comes up in master mode after a reboot and hence executes the master script, even if the node is not supposed to be master. Then the trouble starts and you end up with a split-brain scenario.

    Regards,
    Johan
     
  4. gkontos

    I did some tests running net/samba36 with both machines sharing /zhast. The file sharing service was enabled on both machines.

    The connection was established via the CARP IP. During the reboot of the master there was an obvious delay until the pool became available on the secondary machine, but that was solved by a client reset.

    After the node came back up, CARP did not give it back the master role, so I always had to perform a manual failback.

    Which FreeBSD version are you using?
    Do you by any chance have net.inet.carp.preempt=1 in your /etc/sysctl.conf?
     
  5. phoenix

    This is a known bug with CARP and is being worked on. The interim fix is to not enable the preempt sysctl for CARP.
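
    In practice that just means leaving the sysctl at its default of 0 and not setting net.inet.carp.preempt=1 in /etc/sysctl.conf. You can verify with:

    Code:
    hast1# sysctl net.inet.carp.preempt
    net.inet.carp.preempt: 0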
     
  6. gkontos

  7. johnd

    Great work!
    There is a little typo, but it doesn't affect how the script works. Look for ${resource}, which should be ${resources}.
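
    If you want to patch an installed copy in place, a one-liner along these lines should do it (using the /usr/local/bin/failover path from the article):

    Code:
    sed -i '' 's/\${resource}/${resources}/g' /usr/local/bin/failover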
     
  8. balboah

    hast or not hast

    I've been using a version of this guide to set up my own replication testing in two Xen guests. Having set up disk1 and disk2 in HAST, I've created a pool with mirrored devices. This works most of the time, and all of the time if everything is shut down cleanly.

    But for testing I've also tried resetting the HAST master in the middle of writing a new file, which can get me into trouble. Once, the ZFS metadata got corrupted, which meant the pool rolled back a couple of minutes after I forced an import with zpool import -F.

    Another time it completely locked up on zpool import with state tx->tx, rendering all ZFS tools unusable since they all lock up and wait for this import. The same thing happens on both machines, even after reboots.

    So I'm currently wondering if this method really is reliable enough or if I should go the snapshot sync route without HAST.
     
  9. gkontos

    If you forcibly export a pool during heavy I/O operations then you will eventually end up with corrupted metadata.

    This means that you should never initiate a manual failover during I/O operations.

    What happens though if the primary node crashes?

    The secondary node will try to import the pool, and most probably it will succeed unless heavy corruption has occurred. In that case you can use different import techniques to heal the pool.
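
    For example, something like this (a rough sketch; -F asks zpool import to roll back to the last consistent transaction group, and a scrub afterwards verifies the pool):

    Code:
    hast2# zpool import -F zhast
    hast2# zpool scrub zhast
    hast2# zpool status -v zhast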
     
  10. balboah

    In my recent tests I've simply done ifconfig down or hard reset while a client is copying files to it via NFS.

    More than once I've gotten metadata corruption and errors where zpool import -F tells me to restore the pool from a backup and refuses to import it.

    Seems a bit sketchy to me as the whole point of doing this in my case is to have a reliable backup machine in case the primary burns up. Also to stall NFS clients until the secondary comes up, which works as long as ZFS doesn't get corrupted.

    But I have only tried this in a virtual environment using this setup:
    • two virtual machines running with 1G of ram as Xen guests.
    • two ZFS mirrored virtual drives via hast devices.
    • CARP setup which is monitored by devd and that executes my script similar to the one in the article, with additions to start/stop nfsd.
    Also I'm having these errors which might break things:
    Code:
    Mar 29 10:01:01 storage1 hastd[6690]: [disk2] (primary) Remote request failed (Operation not supported by device): FLUSH.
    Mar 29 10:01:02 storage1 hastd[6690]: [disk2] (primary) Unable to flush disk cache on activemap update: Operation not supported by device.


    My idea was to apply this on real machines with raidz of 3 drives, which later would be expanded by an additional three drives. I'm wondering if anyone has used this setup on real machines and in production?
     
  11. mgiammarco

    Hello,
    I am interested in this setup too. I have tried something similar before with Linux/Pacemaker, and I have these questions:

    1) When I import/export a ZFS pool from master to slave, is the NFS/CIFS setup also imported/exported?

    2) The latency of the slave server kills write performance (at least with Linux/DRBD). I plan to put a battery-backed RAM disk in the slave server. Can I tell ZFS to use it as a ZIL/log device and always write to it first, then copy to the ZFS volume later (the slave HDDs could, for example, stay on standby)?

    3) Is hast stable?

    Thanks,
    Mario
     
  12. mgiammarco

    Sorry, I made a mistake. ZFS does not replicate data to the slave. What I need is for HAST to be able to quickly copy data to a log device first and then to the HDDs.
     
  13. gkontos

    @mgiammarco,

    1) During a failover the resources change roles. This means that your storage becomes unavailable on machine #1 and available on machine #2. Please note that the resources can be available to only one machine, the primary. Some services that depend on that data might therefore complain, so you may need to start/stop those services as well.

    2) I don't understand.

    3) This is very difficult to answer. Why? Because until a technology gets enough real-world use, there is not much user feedback and error reporting.
     
  14. balboah


    Narrowing down my issue:

    In my virtual Xen environment I get some kind of deadlock with state "tx->tx" and 99.8% idle if I do the steps below; all zfs commands stop working and I'm unable to import the pool again, even after a reset of the guest machine:

    Code:
    dd if=/dev/urandom of=./foo bs=100M count=10 &
    zpool export -f storage

    This only occurs when using HAST in between, not if I create the pool directly on the virtual drives. However, it doesn't seem to occur on the real machine that I'm testing with now. Perhaps it's just a bug from using the virtual environment.
     
  15. gkontos

    This could be expected behavior for ZFS running on top of a virtual machine, given its copy-on-write nature. Did you allocate the full disk space to the VMs before conducting the tests?
     
  16. balboah

    Actually I get the same "tx->tx" lockup for a long while on the real servers as well, but they at least recover from it. I haven't yet hit the issue where it's impossible to import the pool again.

    However, when split brain occurs, hastctl reports 1.8TB of "dirty" data instead of the 1-2G that was actually written in total. Is there a way around this?

    systat -io shows 500+ tps and about 60MB/s on all three drives, while network activity is around 500KB/s and the dirty counter in hastctl isn't shrinking very fast either. What's causing all the disk activity?
     
  17. balboah

    When I re-create the secondary that is.
     
  18. zennybsd

    @gkontos: Great stuff you shared. :)

    In Linux, DRBD failover is possible with a single NIC, but

    1. How would this look with HAST on a single NIC? What configuration changes would be needed, if it is possible with your configuration samples above?
    2. Could the two HAST nodes be in two different remote locations, i.e. not on the same local network?

    I know that a single NIC is not ideal for failover, but on a system that lacks expansion slots for extra NICs one has to make do with a single NIC.
     
  19. gkontos

    CARP uses only one NIC in this example. The second NIC is used for data replication. You could use only one NIC and just change the resource names.

    Don't forget that both servers have to share the same IP address via CARP. For nodes in different locations, that means you would have to set up some sort of complex routing.
     
  20. zennybsd

    @gkontos: Thanks!

    From what you said about CARP, it seems that HAST+CARP is good for storage scalability rather than redundancy, right?

    Generally, enterprise-grade operations are run in at least two datacenters, so that if something happens to one datacenter (fire, earthquake, flood, etc.) the IT operations can switch over to the other one in a different geographical location.

    Is there any solution of that kind with HAST+CARP, or is it only a local solution? In GNU/Linux, DRBD with Heartbeat/Corosync is able to do what I described.

    Is that a possibility with HAST? Just curious!
     
  21. gkontos

    Well, it is more for highly available storage solutions, meaning "I need my storage space always online."

    I don't think HAST would fit into a DR category solution yet. In that case, incremental snapshots would work better.

    It would if HAST were to support asynchronous replication. For the time being only fullsync is supported. I believe that DRBD uses asynchronous replication for long-distance clusters.

    Also, CARP is not mandatory for HAST. If HAST supported async replication, that would work as a solution for DR replication.
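
    For reference, hast.conf(5) already documents a replication keyword with fullsync, memsync and async modes, even though only fullsync is implemented right now. If/when async becomes available, a resource could hypothetically be declared like this:

    Code:
    resource disk1 {
            replication async   # documented in hast.conf(5), not yet implemented
            on hast1 {
                    local /dev/ad1
                    remote hast2
            }
            on hast2 {
                    local /dev/ad1
                    remote hast1
            }
    }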
     
  22. zennybsd

    DRBD + Heartbeat/Pacemaker or Corosync in GNU/Linux supports synchronous replication too. Maybe such a setup requires a fencing device for a more effective implementation. Proxmox is a Debian-based distro which uses such an approach (up to 1.9 without any fencing device, only with DRBD+Heartbeat; from 2.0 Proxmox uses DRBD with Corosync). A pretty robust enterprise-grade solution. Just for information.
     
  23. tuaris

    My issues thus far with the ZFS + HAST + CARP + DEVD setup are during system startup and shutdown (related forum post is here: http://forums.freebsd.org/showthread.php?t=29996)

    Hast1 and hast2 are up, running, and properly replicating. Hast1's role is primary, hast2's role is secondary.

    Issue #1


    I pull the (power) plug on hast1 to simulate some type of failure.
    Hast2 takes over and everything is perfect.
    I put hast1 back into service (plug the power back in).
    FreeBSD boots up
    The CARP interface on hast1 switches to MASTER and hast2 switches to BACKUP
    hast2's role is now secondary
    hast1's role is stuck at init
    storage system is down :(

    The cause of this issue is fully explained in the related forum post. Basically it has to do with the fact that hastd isn't running yet. I can easily work around it by modifying the failover script (start hastd if it's not running), but that generates errors/warnings during boot and is not as elegant as I want it to be.
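
    Something along these lines at the top of the failover script is what I mean by that workaround (just a sketch):

    Code:
    # start hastd if devd fires the CARP event before hastd has been started
    if ! pgrep hastd > /dev/null 2>&1; then
            /etc/rc.d/hastd start
    fi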

    Issue #2


    I attempt a clean reboot or shutdown of hast1
    Hast1 hangs
    Hast2 never takes over
    storage system is down :(

    Not sure exactly what causes this issue, but it only happens when the role is primary. Some sources online point to a problem with ZFS and HAST. I have been unable to find a workaround/fix for this.

    Any assistance would be appreciated, and by the way: net.inet.carp.preempt=0 on both hosts.
     
  24. gkontos

    @tuaris

    Issue #1

    When my primary server comes back online it does not automatically assume a MASTER role in CARP.
    I have to manually issue on both nodes:

    #ifconfig carp0 down && ifconfig carp0 up

    Only then do they switch roles. This way I avoid split brain issues.

    Issue #2

    Very strange!
     
  25. tuaris

    Interesting, when I reboot either server, regardless of its current role, it always assumes the MASTER role in CARP.

    For example I have HostA and HostB...

    HostA:
    Code:
    carp0: flags=49<UP,LOOPBACK,RUNNING> metric 0 mtu 1500
            inet 1.2.3.4 netmask 0xffffff00
            nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
            carp: MASTER vhid 1 advbase 1 advskew 0


    HostB:
    Code:
    carp0: flags=49<UP,LOOPBACK,RUNNING> metric 0 mtu 1500
            inet 1.2.3.4 netmask 0xffffff00
            nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
            carp: BACKUP vhid 1 advbase 1 advskew 0


    I reboot HostB.

    HostA:

    Code:
    carp0: flags=49<UP,LOOPBACK,RUNNING> metric 0 mtu 1500
            inet 1.2.3.4 netmask 0xffffff00
            nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
            carp: BACKUP vhid 1 advbase 1 advskew 0
    


    HostB:

    Code:
    carp0: flags=49<UP,LOOPBACK,RUNNING> metric 0 mtu 1500
            inet 1.2.3.4 netmask 0xffffff00
            nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
            carp: MASTER vhid 1 advbase 1 advskew 0