Add New LUN via EMC PowerPath

How to add a new LUN using EMC Navisphere and PowerPath

Overview

  1. Create a new LUN on an EMC Clariion CX3-40 via Navisphere
  2. Assign LUN to proper Storage Group
  3. Use powermt to bring LUN in and configure it
  4. Rescan scsi bus on Linux server for new LUN
  5. Bring new LUN under LVM control and create file system

Create a new LUN on an EMC Clariion CX3-40 via Navisphere

  1. Login to Navisphere.
  2. Within the Enterprise Storage window, expand the Storage Domains tree until the RAID Groups are displayed.
  3. Right-click the RAID Group that will contain the new LUN.
  4. Choose the option Bind LUN...
  5. The Bind LUN dialog will display. Fill it out as shown or with the desired options.
    [screenshot: Bind LUN dialog]
  6. Click Apply. Confirmation dialogs will appear.
    [screenshot: confirmation dialogs]

Assign LUN to proper Storage Group

  1. Next, assign the LUN to the desired Storage Group by right-clicking the new LUN and choosing “Add to Storage Group”.
    [screenshot: Add to Storage Group dialog]
  2. Click OK and confirm.
    [screenshot: confirmation dialog]

Use powermt to bring LUN in and configure it

Now that the LUN has been created, it’s time to discover it on the server. Using the PowerPath Management Utility powermt, we accomplish this by running these commands:

# powermt config
# powermt display dev=all

If the new LUN is not showing, check that both paths are healthy with the powermt display command. If a path is degraded, you may not be able to discover the new LUN. This happened to me, and I had to reboot the machine to clear everything up, which returned both paths to an optimal state. I could have tried powermt restore, which performs an I/O path check and marks alive any paths previously marked dead, but I took the easy way out ;)

Rescan scsi bus on Linux server for new LUN

If the LUN is still not visible after running powermt config, you may rescan the bus:

~# for host in /sys/class/fc_host/*; do echo "1" > "${host}/issue_lip"; done
~# for host in /sys/class/scsi_host/*; do echo "- - -" > "${host}/scan"; done

then run powermt config followed by powermt display again to see the new LUN.
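The two loops above can be rolled into a small script. This is just a sketch; the writability guards are my addition, so it exits cleanly on hosts with no FC HBAs or when run without root for a dry test:

```shell
#!/bin/sh
# Rescan FC and SCSI hosts so the kernel notices the new LUN.
# Run as root; the -w guards make the script a no-op (rather than
# an error) where the sysfs files are absent or not writable.

rescan_hosts() {
    # Issuing a LIP (Loop Initialization Primitive) makes each FC
    # HBA rediscover its targets.
    for lip in /sys/class/fc_host/*/issue_lip; do
        [ -w "$lip" ] && echo "1" > "$lip"
    done
    # "- - -" is a wildcard for channel, target, and LUN: rescan all.
    for scan in /sys/class/scsi_host/*/scan; do
        [ -w "$scan" ] && echo "- - -" > "$scan"
    done
    return 0
}

rescan_hosts
```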

Bring new LUN under LVM control and create file system

Once you know which block device is associated with the LUN, as seen in the powermt display dev=all output, the following series of commands will bring the disk under LVM:

  • fdisk /dev/emcpowerag
  • pvcreate /dev/emcpowerag1
  • vgcreate vg_dwstore /dev/emcpowerag1
  • vgchange --addtag dwetlprod2 vg_dwstore
  • lvcreate -l 25599 -n lv_dwstore vg_dwstore
  • lvchange -ay vg_dwstore/lv_dwstore
  • mkfs.ext3 /dev/vg_dwstore/lv_dwstore
  • mkdir /dwstore
  • mount /dev/vg_dwstore/lv_dwstore /dwstore
[root@dwetlprod2 bin]# fdisk /dev/emcpowerag
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 13054.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/emcpowerag: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p

Partition number (1-4): 1
First cylinder (1-13054, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-13054, default 13054):
Using default value 13054

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@dwetlprod2 bin]#
[root@dwetlprod2 bin]# pvcreate /dev/emcpowerag1
Physical volume "/dev/emcpowerag1" successfully created
[root@dwetlprod2 bin]# vgcreate vg_dwstore /dev/emcpowerag1
Volume group "vg_dwstore" successfully created

[root@dwetlprod2 bin]# pvdisplay /dev/emcpowerag1
--- Physical volume ---
PV Name /dev/emcpowerag1
VG Name vg_dwstore
PV Size 100.00 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 25599
Free PE 25599
Allocated PE 0
PV UUID VkvFFP-qBlo-1U6A-JDXS-nceB-GOCy-xpZH1R

[root@dwetlprod2 bin]#
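The -l 25599 passed to lvcreate next comes straight from the Free PE count in the pvdisplay output, i.e. every extent on the PV. A quick sanity check of the extent math:

```shell
# Extent math from the pvdisplay output above:
# 25599 free extents * 4096 KiB per extent.
free_pe=25599
pe_kib=4096
size_mib=$(( free_pe * pe_kib / 1024 ))
size_gib=$(( size_mib / 1024 ))
echo "${size_mib} MiB (~${size_gib} GiB)"   # 102396 MiB (~99 GiB)
```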
[root@dwetlprod2 bin]# lvcreate -l 25599 -n lv_dwstore vg_dwstore
Failed to activate new LV.

!!! error “Failed to activate new LV”
I have host tags enabled in lvm.conf, so I must tag the volume group with the hostname before the LV will activate!
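For reference, host-tag activation lives in /etc/lvm/lvm.conf. A minimal illustrative sketch of the relevant stanzas (not this server's actual file):

```
# /etc/lvm/lvm.conf (excerpt) - illustrative only
tags {
    # With hosttags = 1 each host's own hostname becomes a tag,
    # so a VG tagged "dwetlprod2" matches only on host dwetlprod2.
    hosttags = 1
}
activation {
    # "@*" activates an LV only if its VG or LV carries a tag
    # matching one of this host's tags.
    volume_list = [ "@*" ]
}
```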

[root@dwetlprod2 bin]# vgchange --addtag dwetlprod2 vg_dwstore
Volume group "vg_dwstore" successfully changed
[root@dwetlprod2 bin]# lvchange -ay vg_dwstore/lv_dwstore
[root@dwetlprod2 bin]#
[root@dwetlprod2 bin]# mkfs.ext3 /dev/vg_dwstore/lv_dwstore
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
13107200 inodes, 26213376 blocks
1310668 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@dwetlprod2 bin]# mkdir /dwstore
[root@dwetlprod2 bin]# mount /dev/vg_dwstore/lv_dwstore /dwstore
[root@dwetlprod2 bin]# df -Ph /dwstore
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dwstore-lv_dwstore 99G 92M 94G 1% /dwstore

Put an entry in /etc/fstab and it’s done!
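The fstab entry would look something like this (mount options and fsck pass number to taste):

```
# /etc/fstab
/dev/vg_dwstore/lv_dwstore  /dwstore  ext3  defaults  1 2
```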
