How To Rescan Linux for a New LUN

How to rescan Linux for newly presented LUNs

This article focuses on the lun_scan utility provided by the Emulex No-Reboot Dynamic Target/LUN Discovery Tool, which is part of the Emulex Drivers for Linux.

To perform the scan manually, the following commands may be issued.

# ls -l /sys/class/fc_host
total 0
drwxr-xr-x 3 root root 0 Jul 9 02:37 host0
drwxr-xr-x 3 root root 0 Jul 9 02:37 host1
# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "1" > /sys/class/fc_host/host1/issue_lip
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
!!! tip "You can also run this through a for loop"
for host in $(ls -1d /sys/class/fc_host/*); do echo "1" > ${host}/issue_lip; done
for host in $(ls -1d /sys/class/scsi_host/*); do echo "- - -" > ${host}/scan ; done
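A combined variant issues the LIP on every FC host first, gives the fabric a moment to settle, and then scans every SCSI host (the 5-second pause is an arbitrary assumption; adjust it to your environment):

for host in /sys/class/fc_host/host*; do echo "1" > ${host}/issue_lip; done
sleep 5
for host in /sys/class/scsi_host/host*; do echo "- - -" > ${host}/scan; done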

After running the rescan commands, you can check whether any newly discovered LUNs are now visible to the system by looking at:

cat /proc/scsi/scsi
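A quick way to tell whether anything new arrived is to count the attached devices before and after the rescan; each device appears as a "Host:" line in /proc/scsi/scsi (a minimal check, not a substitute for reading the full listing):

grep -c "^Host:" /proc/scsi/scsi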

Once the new LUNs are visible, bring them into Linux dm-multipath by issuing:

multipath -v1
multipath -v2

The multipath -v2 command prints the paths it has grouped into multipath devices, showing which devices are multipathed. If the command does not print anything, verify that all SAN connections are set up properly and that multipathing is configured.

To display the full multipath topology at any time, run:

multipath -ll

Here are my cliff notes from when I installed a pair of Emulex HBAs, created LUNs on the EMC, and brought them into Linux. I'm getting ready to go on vacation, so I'm posting my notes here verbatim, with the intention of coming back to reformat them into something better suited to knowledge sharing. Until then, these are raw notes and may not be very descriptive. If there are any questions, feel free to contact me.

How to present EMC LUNs to RHEL using dm-multipath

  1. Perform zoning using Connectrix.
  2. Install the HostAgent on the RHEL system and start it.
  3. In Navisphere:
    • Create a new storage group
    • Right-click the newly created storage group and:
      • Connect Hosts
      • Select LUNs
  4. Install the required packages:
     yum install device-mapper-multipath device-mapper-multipath-libs multipath-tools dm-devel pciutils kernel-devel make gcc rpm-build redhat-rpm-config
EDS etlprod2 ~ # multipath -v3 -d
Mar 28 15:57:51 | DM multipath kernel driver not loaded
Mar 28 15:57:51 | DM multipath kernel driver not loaded

EDS etlprod2 ~ # chkconfig multipathd on
EDS etlprod2 ~ # /etc/init.d/multipathd start
EDS etlprod2 ~ # multipath -v3 -d
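If the "DM multipath kernel driver not loaded" message persists, the module can also be loaded by hand (starting multipathd normally takes care of this, so treat this as a fallback):

modprobe dm_multipath
lsmod | grep dm_multipath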
Get the WWID of the sda device (the local disk, so it can be blacklisted) and change multipath.conf to:

defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    path_checker emc_clariion
    hwtable_regex_match yes
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
    devnode "^sda[0-9]*"
    wwid 36782bcb02255b5001895ca3f07c57353
}

devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        #prio_callout "/sbin/mpath_prio_emc /dev/%n"
        features "1 queue_if_no_path"
        hardware_handler "1 emc"
        path_selector "round-robin 0"
        path_grouping_policy group_by_prio
        failback immediate
        rr_weight uniform
        no_path_retry 60
        rr_min_io 1000
        path_checker emc_clariion
        prio emc
    }
}
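To get the WWID for the blacklist entry above, the same scsi_id call used in the getuid_callout can be pointed at the local disk (the exact flags vary between releases, so verify against your system):

/lib/udev/scsi_id --whitelisted --device=/dev/sda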

Run

  • lun_scan.sh
  • multipath -ll

Right-click the host in the storage group and choose "Update Now".

This will show the host-side device name at the end of each LUN entry.

The mpath devices are in /dev/mapper.

!!! tip "Create an aliases flat file"

EDS etlprod2 emc # cat aliases 
LUN 24 etlprod2 metabackupvg-backup2lv  RAID Group 13   70 GB   mpatho
LUN 25 etlprod2 metabackupvg-backuplv   RAID Group 13   45 GB   mpathe
LUN 26 etlprod2 metadatavg-data01lv     RAID Group 12   11 GB   mpathf
LUN 27 etlprod2 metadatavg-data02lv     RAID Group 12   11 GB   mpathh
LUN 28 etlprod2 metadatavg-data03lv     RAID Group 12   11 GB   mpathi
LUN 29 etlprod2 metadatavg-data04lv     RAID Group 12   11 GB   mpathn
LUN 30 etlprod2 metadatavg-data05lv     RAID Group 11   20 GB   mpathk
LUN 31 etlprod2 metadatavg-data06lv     RAID Group 11   20 GB   mpathg
LUN 32 etlprod2 metadatavg-data07lv     RAID Group 11   14 GB   mpathc
LUN 33 etlprod2 metalogvg-arch2lv       RAID Group 10   14 GB   mpathj
LUN 34 etlprod2 metalogvg-archlv        RAID Group 10   5 GB    mpathl
LUN 35 etlprod2 metalogvg-log01lv       RAID Group 10   3 GB    mpathd
LUN 36 etlprod2 metalogvg-log02lv       RAID Group 10   3 GB    mpathb
LUN 37 etlprod2 srcdata2vg-srcdata2lv   RAID Group 9    50 GB   mpathm

Create a script to parse the above aliases file

EDS etlprod2 emc # cat map_wwid_to_lun.sh 
#!/bin/bash
# Build a multipaths {} stanza for multipath.conf: for each mpath device
# listed in the aliases file (column 10), look up its WWID and emit an
# alias based on the planned VG/LV name (column 4).

echo "multipaths {"
for device in $(awk '{ print $10 }' aliases); do
    wwid=$(scsi_id -g /dev/mapper/$device)
    alias=$(grep $device aliases | awk '{ print $4 }')

    #multipath -ll | grep "$wwid"

    echo " multipath {
wwid $wwid
alias $alias
}"
done
echo "}"
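For example, the whole stanza can be captured and appended in one go (assuming multipath.conf does not already contain a multipaths section):

sh map_wwid_to_lun.sh >> /etc/multipath.conf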

After adding the output to multipath.conf, reload the maps and verify:

/etc/init.d/multipathd reload
multipath -ll

Use LVM to create Logical Volumes and Volume Groups

Make sure a suitable filter is set in the LVM config file /etc/lvm/lvm.conf, so that LVM scans the multipath devices rather than the underlying sd* paths.
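A minimal sketch of such a filter, assuming the volume groups live on the /dev/mapper multipath partitions and the local system disk is /dev/sda (adjust the patterns to your environment):

filter = [ "a|^/dev/mapper/|", "a|^/dev/sda|", "r|.*|" ]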

FDISK

!!! note
set starting block at 128 for EMC

EDS etlprod2 ~ # FDISK_CMDLIST="n\np\n1\n\n\nt\n8e\nx\nb\n1\n128\nw\n"
EDS etlprod2 ~ # for device in $(lvmdiskscan | grep mapper | awk '{ print $1 }' | sort -n); do echo -e -n "${FDISK_CMDLIST}" | ( fdisk $device ); done
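For reference, this is roughly what that keystroke string walks fdisk through (based on standard fdisk prompts; double-check against your fdisk version before running it against real data):

n, p, 1, <Enter>, <Enter>  - create primary partition 1 using the default first and last cylinders
t, 8e                      - set the partition type to Linux LVM
x, b, 1, 128               - expert mode: move the beginning of data in partition 1 to block 128
w                          - write the partition table and exit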

PVCREATE

!!! quote "Example"
pvcreate /dev/mapper/auditlogvg-log0[123]p1

EDS etlprod2 ~ # for device in $(lvmdiskscan | grep p1 | awk '{ print $1 }' | sort -n); do  pvcreate $device; done
Physical volume "/dev/mapper/metabackup-backup01p1" successfully created
Physical volume "/dev/mapper/metabackup-backup02p1" successfully created
Physical volume "/dev/mapper/metadata-data01p1" successfully created
Physical volume "/dev/mapper/metadata-data02p1" successfully created
Physical volume "/dev/mapper/metadata-data03p1" successfully created
Physical volume "/dev/mapper/metadata-data04p1" successfully created
Physical volume "/dev/mapper/metadata-data05p1" successfully created
Physical volume "/dev/mapper/metadata-data06p1" successfully created
Physical volume "/dev/mapper/metadata-data07p1" successfully created
Physical volume "/dev/mapper/metalog-arch01p1" successfully created
Physical volume "/dev/mapper/metalog-arch02p1" successfully created
Physical volume "/dev/mapper/metalog-log01p1" successfully created
Physical volume "/dev/mapper/metalog-log02p1" successfully created
Physical volume "/dev/mapper/srcdata02p1" successfully created

VGCREATE

!!! quote "Example"
vgcreate vg_auditlog /dev/mapper/auditlogvg-log0[123]p1

EDS etlprod2 ~ # for vg in $(lvmdiskscan | grep p1 | awk '{ print $1 }' | awk -F/ '{ print $4 }' | sed 's/-.*$//' | sed 's/02p1//' | sort -n | uniq); do list=`ls /dev/mapper/$vg*p1`; vgcreate vg_$vg $list; done
Volume group "vg_metabackup" successfully created
Volume group "vg_metadata" successfully created
Volume group "vg_metalog" successfully created
Volume group "vg_srcdata" successfully created
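A quick sanity check that the new volume groups exist and have the sizes you expect:

vgs vg_metabackup vg_metadata vg_metalog vg_srcdata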

PARTPROBE

Use partprobe to bring the new partition devices into /dev/mapper.
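For example, mirroring the lvmdiskscan loop used for fdisk above, something like this re-reads the partition table on each multipath device (a sketch; the grep -v p1 simply skips any partition devices that already exist):

for device in $(lvmdiskscan | grep mapper | grep -v p1 | awk '{ print $1 }'); do partprobe $device; done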

!!! question "Why partprobe?"
http://www.redhat.com/advice/tips/rhce/partprobe.html

One Achilles heel for Linux, until the past couple of years, has been the fact that the Linux kernel only reads partition table information at system initialization, necessitating a reboot any time you wish to add new disk partitions to a running system.
The good news, however, is that disk re-partitioning can now also be handled ‘on-the-fly’ thanks to the ‘partprobe’ command, which is part of the ‘parted’ package.

Using ‘partprobe’ couldn’t be more simple. Any time you use ‘fdisk’, ‘parted’ or any other favorite partitioning utility you may have to modify the partition table for a drive, run ‘partprobe’ after you exit the partitioning utility and ‘partprobe’ will let the kernel know about the modified partition table information. If you have several disk drives and want to specify a specific drive for ‘partprobe’ to scan, you can run ‘partprobe <device_node>’

Of course, given a particular hardware configuration, shutting down your system to add hardware may be unavoidable; still, it's nice to be given the option of not having to do so, and 'partprobe' fills that niche quite nicely.

!!! info "RHEL 6 and partprobe"
https://access.redhat.com/solutions/57542

partprobe was commonly used in RHEL 5 to inform the OS of partition table changes on a disk. In RHEL 6, it will only trigger the OS to update the partitions on a disk if none of that disk's partitions are in use (e.g. mounted). If any partition on the disk is in use, partprobe will not trigger the update, because doing so is considered unsafe in some situations.

So in general we would suggest:

  1. Unmount all partitions of the disk before modifying its partition table, then run partprobe to update the partitions in the system.
  2. If this is not possible (e.g. the mounted partition is a system partition), reboot the system after modifying the partition table. The partition information will be re-read after the reboot.

If a new partition was added and none of the existing partitions were modified, consider using the partx command to update the system partition table. Note that partx does little checking between the new and the existing partition tables and assumes the user knows what they are doing, so it can corrupt on-disk data if existing partitions were modified or the partition table is set incorrectly. Use it at your own risk.

For example, suppose partition #1 already exists on /dev/sdb and a new partition #2 has just been added with fdisk. Here we use partx -v -a /dev/sdb to add the new partition to the system:

# ls /dev/sdb*
/dev/sdb  /dev/sdb1

List the partition table of disk:
   # partx -l /dev/sdb
   # 1:        63-   505007 (   504945 sectors,    258 MB)
   # 2:    505008-  1010015 (   505008 sectors,    258 MB)
   # 3:         0-       -1 (        0 sectors,      0 MB)
   # 4:         0-       -1 (        0 sectors,      0 MB)

Read disk and try to add all partitions to the system:
   # partx -v -a /dev/sdb
   device /dev/sdb: start 0 size 2097152
   gpt: 0 slices
   dos: 4 slices
   # 1:        63-   505007 (   504945 sectors,    258 MB)
   # 2:    505008-  1010015 (   505008 sectors,    258 MB)
   # 3:         0-       -1 (        0 sectors,      0 MB)
   # 4:         0-       -1 (        0 sectors,      0 MB)
   BLKPG: Device or resource busy
   error adding partition 1

The last two lines are expected in this case, because partition 1 was already known to the system before partition 2 was added.

Check that we have device nodes for /dev/sdb itself and the partitions on it:

# ls /dev/sdb*
/dev/sdb  /dev/sdb1  /dev/sdb2

LVCREATE

Options:
-L <size in M or G (see the man page)>
or
-l <number of physical extents: take the PE count for the VG from vgdisplay <vg> and divide it across the logical volumes you plan to create, or size each one individually if different sizes are desired>

lvcreate -l 766 -n lv_log01 vg_auditlog
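As a sketch of the -l form, the free extent count can be pulled from vgdisplay and split across several logical volumes (the VG and LV names below are made up for illustration, and the awk pattern assumes stock vgdisplay output):

FREE_PE=$(vgdisplay vg_metadata | awk '/Free  PE/ { print $5 }')
lvcreate -l $((FREE_PE / 2)) -n lv_data01 vg_metadata
lvcreate -l $((FREE_PE / 2)) -n lv_data02 vg_metadata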

create file systems and mount

for vg in $(vgs --noheadings --options vg_name | grep -v etlprod1); do for lv in $(ls -1 /dev/$vg); do echo "mkfs.ext4 /dev/$vg/$lv"; done; done
for vg in $(vgs --noheadings --options vg_name | grep -v etlprod1); do for lv in $(ls -1 /dev/$vg); do mkfs.ext4 /dev/$vg/$lv; done; done
for vg in $(vgs --noheadings --options vg_name | grep -v etlprod1); do for lv in $(ls -1 /dev/$vg); do echo "/dev/$vg/$lv <mpoint>                    ext4    defaults        1 2"; done; done

edit fstab
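Once the new entries are in fstab (and the mount points have been created), mounting everything in one shot gives the ownership loop below something to act on:

mount -a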

for fs in $(mount | grep mapper | grep -v etlprod1 | awk '{ print $3 }'); do chown oracle:dba $fs; done
for fs in $(mount | grep mapper | grep -v etlprod1 | awk '{ print $3 }'); do ls -ld $fs; done

One day, I hope to clean this up a bit and lay the article out in a more knowledge-sharing format. As you can see, there is a fair amount of legwork involved; I remember spending a few hours getting this all set up and figuring things out as I went along, hence the cliff notes. Again, this is here as reference material for my own benefit and to share with anyone who may be performing a similar setup.
