The main purpose of multipath connectivity is to provide redundant access to storage devices, i.e., to keep access to a storage device when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example is an iSCSI SAN-connected storage device: you get both redundancy and maximum performance.
The common use case for this kind of storage system is shared storage between multiple servers. It could be for a virtualization system like VMware ESXi with the VMFS file system, or just between Linux hosts using GFS or OCFS2. This post is my experience with configuring iSCSI multipath on Ubuntu Server with the OCFS2 file system, to get shared storage that two servers can access at the same time.
iSCSI multipath
Install the software
On Ubuntu, open-iscsi and multipath-tools are the two packages needed for a multipath iSCSI configuration. Both are available in the official Ubuntu repository, so we can install them directly with apt-get:
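$ sudo apt-get install open-iscsi multipath-tools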
On Ubuntu, the iSCSI initiator is provided by the open-iscsi service. We can adjust its configuration in /etc/iscsi/iscsid.conf. For example, we can make the target login happen automatically and adjust the replacement_timeout value:
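In /etc/iscsi/iscsid.conf:

# log in to discovered targets automatically when the service starts
node.startup = automatic
# example value; a lower timeout fails I/O back to multipath sooner on a dead path
node.session.timeo.replacement_timeout = 15

With those settings in place, we can discover the targets and log in with iscsiadm (the portal IP below is just a placeholder, replace it with your storage array's address):

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100
$ sudo iscsiadm -m node --login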
After logging in successfully, we should be able to see the new disk devices in the OS. Let's use the fdisk -l command to validate them. In my example, there are 3 new devices: sdc, sdd and sde:
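$ sudo fdisk -l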
In the fdisk example above, the 3 new disks have exactly the same size and are also represented by a single device named /dev/mapper/360002ac000000000000000030002426b. This device is created by multipath-tools. We can verify the iSCSI multipath status with the multipath -ll command:
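$ sudo multipath -ll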
The multipath config file is /etc/multipath.conf. It is recommended to adjust the multipath config to set up device blacklisting and naming. I normally blacklist all devices and only allow specific ones using blacklist_exceptions. I also set the device name to mpath0 using the alias directive.
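Here is a minimal sketch of such an /etc/multipath.conf, reusing the WWID from the example above (replace it with your own device's WWID):

blacklist {
    devnode ".*"
}

blacklist_exceptions {
    wwid "360002ac000000000000000030002426b"
}

multipaths {
    multipath {
        wwid  "360002ac000000000000000030002426b"
        alias mpath0
    }
}

After editing the file, restart the multipath daemon (multipathd on recent Ubuntu releases) so the changes take effect; the device should then show up as /dev/mapper/mpath0.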
In order to share the same disk between hosts, we cannot use a traditional file system such as ext4 or ZFS. We need a shared-disk file system. In this article, I am going to use OCFS2, the second version of the Oracle Cluster File System.
First of all, we need to install the package:
$ sudo apt-get install ocfs2-tools
Now, on the first node, we initialize the cluster with the name ocfs2test and add the node members. In the following example, server1 and server2 are the two OCFS2 nodes. Make sure they can resolve each other's names to IP addresses properly.
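A sketch of those steps using the o2cb utility from ocfs2-tools; the IP addresses here are placeholders for this example:

$ sudo o2cb add-cluster ocfs2test
$ sudo o2cb add-node --ip 10.0.0.1 ocfs2test server1
$ sudo o2cb add-node --ip 10.0.0.2 ocfs2test server2

These commands write /etc/ocfs2/cluster.conf, which must be identical on all nodes, so copy the file to server2 as well.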
On every single node, configure ocfs2-tools using the Debian dpkg-reconfigure command. All the options can be left at their default values, except that you have to make the cluster start on boot and specify the cluster name we set in the previous step.
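$ sudo dpkg-reconfigure ocfs2-tools

Answer yes when asked whether the cluster should be started on boot, and enter ocfs2test as the cluster name.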