
Sunday, January 4, 2015

How to add disk to meta-set and extend the FS in Sun-cluster:

This procedure shows how to add a new disk to an SVM diskset that is part of a Sun Cluster 3.2 configuration and use it to extend an existing file system. The scenario: the file system /export/zones/tst01/oracle_LT4/sapdata0 is a 2 GB soft partition that needs to be extended by another 10 GB, but there is no free space left in the diskset to extend it. So we are going to add a new LUN to the diskset and then extend the file system.

# df -h /export/zones/tst01/oracle_LT4/sapdata0
Filesystem size used avail capacity Mounted on
/dev/md/tst01_dg/dsk/d320 2.0G 3.2M 1.9G 1% /export/zones/tst01/oracle_LT4/sapdata0
# metastat -s tst01_dg -t
tst01_dg/d320: Soft Partition
Device: tst01_dg/d300
State: Okay
Size: 4194304 blocks (2.0 GB)
Extent Start Block Block count
0 40411488 4194304
tst01_dg/d300: Mirror
Submirror 0: tst01_dg/d301
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 492134400 blocks (234 GB)
tst01_dg/d301: Submirror of tst01_dg/d300
State: Okay
Size: 492134400 blocks (234 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
d41s0 0 No Okay No
Stripe 1:
Device Start Block Dbase State Reloc Hot Spare
d42s0 0 No Okay No
Stripe 2:
Device Start Block Dbase State Reloc Hot Spare
d43s0 0 No Okay No
Stripe 3:
Device Start Block Dbase State Reloc Hot Spare
d44s0 0 No Okay No
Stripe 4:
Device Start Block Dbase State Reloc Hot Spare
d49s0 0 No Okay No
Stripe 5:
Device Start Block Dbase State Reloc Hot Spare
d50s0 0 No Okay No
Stripe 6:
Device Start Block Dbase State Reloc Hot Spare
d51s0 0 No Okay No
Stripe 7:
Device Start Block Dbase State Reloc Hot Spare
d61s0 0 No Okay No
Stripe 8:
Device Start Block Dbase State Reloc Hot Spare
d62s0 0 No Okay No
Device Relocation Information:
Device Reloc Device ID
d41 No -
d42 No -
d43 No -
d44 No -
d49 No -
d50 No -
d51 No -
d61 No -
d62 No -
root@server101:/root :

# metaset -s tst01_dg
Set name = tst01_dg, Set number = 1
Host Owner
server101 Yes
server102
Drive Dbase
d41 Yes <===========DID Device
d42 Yes <===========DID Device
d43 Yes <===========DID Device
d44 Yes <===========DID Device
d49 Yes <===========DID Device
d50 Yes <===========DID Device
d51 Yes <===========DID Device
d61 Yes <===========DID Device
d62 Yes <===========DID Device


The metaset above consists of DID devices. In Sun Cluster, a DID device provides a unique device name for every disk across the cluster. Since these LUNs are shared between the nodes, the same disk must be reachable under the same name when the resource group is brought up on the partner node during a failover, which is why DID devices are used. The information about DID devices is kept in the CCR (Cluster Configuration Repository), and changes to it are replicated among the cluster nodes; we will cover this in more detail in a later post. From the metastat and metaset output above we can see that the soft partition d320 is carved from mirror d300, and mirror d300 has a concat submirror d301 built from the DID devices listed.
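If you want to see how a DID name maps to the physical device path on each node, scdidadm can list the full cluster-wide mapping (a quick sketch; on Sun Cluster 3.2 the object-based cldevice command shows the same information):

root@server101:/root : scdidadm -L
root@server101:/root : cldevice list -v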

Step-1: Now request the storage team to allocate a LUN for the system

Step-2: The storage team should give you the LUN ID; without it you cannot proceed further. The LUN here is 60050766018500BE70000000000000FC.

Step-3: With this information, check the visibility of the LUN from both cluster nodes:

root@server101:/root : echo |format |grep -i 60050766018500BE70000000000000FC
46. c360050766018500BE70000000000000FCd0 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050766018500be70000000000000fc

root@server102:/root : echo |format |grep -i 60050766018500BE70000000000000FC
46. c360050766018500BE70000000000000FCd0 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050766018500be70000000000000fc

In case the LUN is not visible in the format output, follow the procedure below. These LUNs are presented to the server through dynamically reconfigurable hardware, which for a SAN LUN is the FC fabric, so first find the fc-fabric attachment points:

root@server101:/root : cfgadm -la |grep -i fabric
c1 fc-fabric connected configured unknown
c2 fc-fabric connected configured unknown
root@server101:/root :
root@server101:/root : cfgadm -C configure c1
root@server101:/root : cfgadm -C configure c2

Here we are telling controllers c1 and c2 to reconfigure themselves so that the new LUN becomes visible and usable by Solaris. Perform Step-3 again and the LUN should now appear. If it does not, inform the storage team, cross-check with them that the zoning was done properly, and after confirmation recheck and test again using the cfgadm utility.
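If you want to confirm at the HBA level whether the new LUN is actually being presented on the fabric, the fp plug-in of cfgadm can list the FCP devices behind each controller (a sketch using the controllers found above; output omitted here):

root@server101:/root : cfgadm -al -o show_FCP_dev c1
root@server101:/root : cfgadm -al -o show_FCP_dev c2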

Step-4: Format the disk and label it.

# format -d <disk-name>
Then use the label option at the format> prompt.
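For this LUN the labeling session would look roughly like the following (a sketch; the device name is the one seen in the format output above, and the prompts may vary depending on whether the disk already carries a label):

root@server101:/root : format -d c360050766018500BE70000000000000FCd0
format> label
Ready to label disk, continue? y
format> quit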
Step-5: Now we need to create the DID device. For this, the DID database, which lives in the CCR, has to be updated, so issue the following command on both cluster nodes, server101 and server102:

scdidadm -r

The -r option tells scdidadm to reconfigure the DID database: it re-scans the device trees and assigns identifiers to devices that were not recognized before. Never edit the DID database file manually without Sun Microsystems support. After the command has run, reconfirm on both nodes that the DID device was created:

root@server101:/root : scdidadm -l |grep -i 60050766018500BE70000000000000FC
9 server101:/dev/rdsk/c360050766018500BE70000000000000FCd0 /dev/did/rdsk/d9
root@server101:/root :
root@server102:/root : scdidadm -l |grep -i 60050766018500BE70000000000000FC
9 server102:/dev/rdsk/c360050766018500BE70000000000000FCd0 /dev/did/rdsk/d9
root@server102:/root :

Step-6: Next, update the global device namespace, a Sun Cluster 3.2 feature mounted under /global.
This namespace is visible to every node in the cluster and contains links to the physical devices, so it makes the device accessible from both nodes. Run the scgdevs global-devices namespace administration script on any one of the nodes.
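A minimal illustration of that step (scgdevs takes no arguments and, per the note above, only needs to be run on one of the nodes):

root@server101:/root : scgdevs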

Step-7: Now check the disk path. It should be monitored by the cluster, and a failure of the disk path would cause the node to panic.
Our new DID device is d9:
root@server101:/root : scdpm -p all |grep "d9"
server101:/dev/did/rdsk/d9 Fail
server102:/dev/did/rdsk/d9 Fail
root@server101:/root :

The disk path state is Fail, so we need to bring it to a proper valid state: simply un-monitor the disk path and then monitor it again.
root@server101:/root :scdpm -u /dev/did/rdsk/d9
root@server101:/root :scdpm -m /dev/did/rdsk/d9

root@server101:/root : scdpm -p all |grep "d9"
server101:/dev/did/rdsk/d9 Ok
server102:/dev/did/rdsk/d9 Ok

Step-8: Add the DID device d9 to the diskset tst01_dg now.
root@server101:/root :metaset -s tst01_dg -a /dev/did/dsk/d9
Step-9: Once you add the DID device to the diskset, it is automatically repartitioned with the same VTOC layout as the other disks in the diskset.
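If you want to verify this, compare the label of the new DID device with that of an existing diskset member (a sketch; slice 2 is used only because it describes the whole disk):

root@server101:/root : prtvtoc /dev/did/rdsk/d9s2
root@server101:/root : prtvtoc /dev/did/rdsk/d41s2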

Step-10: Check the diskset:
root@server101:/root :metaset -s tst01_dg
Set name = tst01_dg, Set number = 1
Host Owner
server101 Yes
server102
Drive Dbase
d41 Yes
d42 Yes
d43 Yes
d44 Yes
d49 Yes
d50 Yes
d51 Yes
d61 Yes
d62 Yes
d6 Yes
d9 Yes <=====================New DID device is in place

Step-11: Attach the DID device to the submirror d301 to extend the mirror, as shown:
root@server101:/root :metattach -s tst01_dg d301 /dev/did/dsk/d9s0

Step-12: Now grow the soft partition and the file system to the desired size.
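Note that growfs only grows the file system; the soft partition d320 itself first has to be extended into the space just attached to the mirror. A sketch of that step (the 9g figure is an assumption, chosen to bring the 2 GB soft partition to roughly the 11 GB shown below; use the size you actually need):

root@server101:/root :metattach -s tst01_dg d320 9g

Then grow the file system on top of it: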
root@server101:/root :growfs -M /export/zones/tst01/oracle/sapdata0 /dev/md/tst01_dg/rdsk/d320
/dev/md/rdsk/d320: Unable to find Media type. Proceeding with system determined parameters.
Warning: 9216 sector(s) in last cylinder unallocated
/dev/md/rdsk/d320: 2107392 sectors in 104 cylinders of 24 tracks, 848 sectors
1029.0MB in 26 cyl groups (4 c/g, 39.75MB/g, 19008 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 82288, 164544, 246800, 329056, 411312, 493568, 575824, 658080, 740336,
1316128, 1398384, 1480640, 1562896, 1645152, 1727408, 1809664, 1891920,
1974176, 2056432
root@server101:/root

root@server101:/root : df -h /export/zones/tst01/oracle/sapdata0
Filesystem size used avail capacity Mounted on
/dev/md/tst01_dg/dsk/d320 11G 3.2M 10.9G 1% /export/zones/tst01/oracle/sapdata0
root@server101:/root :


How to Create a Stripe Volume in VXVM

To create a striped volume, you need to pass the layout type and other attributes to the vxassist make command.

vxassist [-g diskgroup] make volume_name length layout=stripe ncol=3 stripeunit=size [disks...]


We are going to create the striped volume in the adg disk group. First, check the free disk space in the disk group:

# vxdg -g adg free
DISK DEVICE TAG OFFSET LENGTH FLAGS
disk5 c1t9d0s2 c1t9d0 0 6205440 -
disk6 c1t10d0s2 c1t10d0 0 6201344 -
disk7 c1t11d0s2 c1t11d0 0 6201344 -

# vxassist -g adg maxsize ncol=3
Maximum volume size: 18604032 (9084Mb)
bash-3.00#

# vxassist -g adg make oradata 9g layout=stripe disk5 disk6 disk7
VxVM vxassist ERROR V-5-1-435 Cannot allocate space for 18874368 block volume

# vxassist -g adg make oradata 8g layout=stripe disk5 disk6 disk7

# mkfs -F vxfs /dev/vx/rdsk/adg/oradata
version 7 layout
16777216 sectors, 8388608 blocks of size 1024, log size 16384 blocks
largefiles supported

# mkdir /oradata

# mount -F vxfs /dev/vx/dsk/adg/oradata /oradata

# df -h /oradata
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/adg/oradata
8.0G 19M 7.5G 1% /oradata
# vxassist -g adg maxsize ncol=3
Maximum volume size: 1824768 (891Mb)
bash-3.00#



How to re-size the Stripe Volume in VXVM:

Volume Manager has the following internal restrictions regarding the extension of striped volume columns:
  • Device(s) used in one column cannot be used in any other columns in that volume.
  • All stripe columns must be grown in parallel.

Use the following commands to determine if you have enough devices or free space to grow your volume.

# df -h /oradata
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/adg/oradata
8.0G 19M 7.5G 1% /oradata

# vxassist -g adg maxgrow oradata ncol=3
Volume oradata can be extended by 1826816 to: 18604032 (9084Mb)


# vxassist -g adg maxsize ncol=3
Maximum volume size: 1824768 (891Mb)
bash-3.00#

# vxprint -htqg adg oradata
v oradata - ENABLED ACTIVE 16777216 SELECT oradata-01 fsgen
pl oradata-01 oradata ENABLED ACTIVE 16777344 STRIPE 3/128 RW
sd disk5-01 oradata-01 disk5 0 5592448 0/0 c1t9d0 ENA
sd disk6-01 oradata-01 disk6 0 5592448 1/0 c1t10d0 ENA
sd disk7-01 oradata-01 disk7 0 5592448 2/0 c1t11d0 ENA
bash-3.00#

The volume above is a 3-column striped volume. You can determine this by examining the plex line: the value 3/128 following STRIPE is shown in COLUMNS/STRIPE_WIDTH format.

There is still a little room to grow using only the currently available devices, so first grow the volume by that remaining space:

# vxassist -g adg maxsize ncol=3
Maximum volume size: 1824768 (891Mb)

# /etc/vx/bin/vxresize -g adg oradata +891m ncol=3

# vxassist -g adg maxgrow oradata ncol=3
Volume oradata can be extended by 2048 to: 18604032 (9084Mb)
# /etc/vx/bin/vxresize -g adg oradata +2048 ncol=3

# df -h /oradata
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/adg/oradata
8.9G 19M 8.3G 1% /oradata

You can predetermine how much space Volume Manager can extend your volume by using the maxgrow option; now that the existing columns are full, it reports an error:

# vxassist -g adg maxgrow oradata ncol=3
VxVM vxassist ERROR V-5-1-1178 Volume oradata cannot be extend within the given constraints
bash-3.00#


Because VxVM requires a unique device for each stripe column, and there are no devices left that can be used for the three-column volume, the grow operation cannot run. To resolve this you must either add enough storage devices to satisfy the above constraints or use a relayout operation to change the volume's column count (a quick relayout sketch is shown below).
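For reference, a column-count relayout would look roughly like this (a sketch only, assuming a spare device with enough free space is available; a relayout runs online but can take considerable time):

# vxassist -g adg relayout oradata ncol=4
# vxrelayout -g adg status oradata

Here, though, we continue by adding more devices to the disk group: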


# vxdg -g adg adddisk disk8=c1t12d0

# vxprint -d -g adg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dm disk5 c1t9d0s2 - 6205440 - - - -
dm disk6 c1t10d0s2 - 6201344 - - - -
dm disk7 c1t11d0s2 - 6201344 - - - -
dm disk8 c1t12d0s2 - 6205440 - - - -
bash-3.00# vxassist -g adg maxsize ncol=3
VxVM vxassist ERROR V-5-1-752 No volume can be created within the given constraints

In the example, one more device is added to the disk group:

# vxdg -g adg adddisk disk9=c1t13d0

# vxprint -d -g adg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dm disk5 c1t9d0s2 - 6205440 - - - -
dm disk6 c1t10d0s2 - 6201344 - - - -
dm disk7 c1t11d0s2 - 6201344 - - - -
dm disk8 c1t12d0s2 - 6205440 - - - -
dm disk9 c1t13d0s2 - 6205440 - - - -

# vxassist -g adg maxsize ncol=3
Maximum volume size: 12288 (6Mb)

We have only 6 MB of space available to grow the ncol=3 striped volume. This is not sufficient to grow the file system, so one more device is added to the disk group in order to grow the file system to 17 GB.

One more device is added to the disk group, and now there is enough space for the resize to complete without complaint:

# vxdg -g adg adddisk disk10=c1t14d0

# vxassist -g adg maxsize ncol=3
Maximum volume size: 18616320 (9090Mb)

# vxprint -d -g adg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dm disk5 c1t9d0s2 - 6205440 - - - -
dm disk6 c1t10d0s2 - 6201344 - - - -
dm disk7 c1t11d0s2 - 6201344 - - - -
dm disk8 c1t12d0s2 - 6205440 - - - -
dm disk9 c1t13d0s2 - 6205440 - - - -
dm disk10 c1t14d0s2 - 6205440 - - - -
# vxassist -g adg maxgrow oradata ncol=3
Volume oradata can be extended by 18616320 to: 37220352 (18174Mb)

# df -h /oradata
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/adg/oradata
8.9G 19M 8.3G 1% /oradata

# /etc/vx/bin/vxresize -g adg oradata +8g

# df -h /oradata
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/adg/oradata
17G 21M 16G 1% /oradata

The final layout now shows two subdisks per column:

# vxprint -htqg adg oradata
v oradata - ENABLED ACTIVE 35381248 SELECT oradata-01 fsgen
pl oradata-01 oradata ENABLED ACTIVE 35381376 STRIPE 3/128 RW
sd disk5-01 oradata-01 disk5 0 6205440 0/0 c1t9d0 ENA
sd disk10-01 oradata-01 disk10 0 5588352 0/6205440 c1t14d0 ENA
sd disk6-01 oradata-01 disk6 0 6201344 1/0 c1t10d0 ENA
sd disk8-01 oradata-01 disk8 0 5592448 1/6201344 c1t12d0 ENA
sd disk7-01 oradata-01 disk7 0 6201344 2/0 c1t11d0 ENA
sd disk9-01 oradata-01 disk9 0 5592448 2/6201344 c1t13d0 ENA