How to add a disk to a metaset and extend the FS in Sun Cluster

This procedure shows how to add a new disk to an SVM diskset that is part of a Sun Cluster 3.2 configuration and then use it to extend an existing file system. The scenario: we have a file system /export/zones/tst01/oracle/sapdata0 sitting on a 2 GB soft partition, and we need to extend it by another 10 GB, but there is no free space left in the diskset. So we are going to add a new LUN and use it to extend the FS.
# df -h /export/zones/tst01/oracle/sapdata0
Filesystem                  size   used  avail capacity  Mounted on
/dev/md/tst01_dg/dsk/d320   2.0G   3.2M   1.9G     1%    /export/zones/tst01/oracle/sapdata0
# metastat -s tst01_dg
tst01_dg/d320: Soft Partition
    Device: tst01_dg/d300
    State: Okay
    Size: 4194304 blocks (2.0 GB)
        Extent          Start Block          Block count
             0             40411488              4194304

tst01_dg/d300: Mirror
    Submirror 0: tst01_dg/d301
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 492134400 blocks (234 GB)

tst01_dg/d301: Submirror of tst01_dg/d300
    State: Okay
    Size: 492134400 blocks (234 GB)
    Stripe 0:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d41s0              0     No         Okay    No
    Stripe 1:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d42s0              0     No         Okay    No
    Stripe 2:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d43s0              0     No         Okay    No
    Stripe 3:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d44s0              0     No         Okay    No
    Stripe 4:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d49s0              0     No         Okay    No
    Stripe 5:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d50s0              0     No         Okay    No
    Stripe 6:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d51s0              0     No         Okay    No
    Stripe 7:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d61s0              0     No         Okay    No
    Stripe 8:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d62s0              0     No         Okay    No

Device Relocation Information:
Device   Reloc   Device ID
d41      No      -
d42      No      -
d43      No      -
d44      No      -
d49      No      -
d50      No      -
d51      No      -
d61      No      -
d62      No      -
root@server101:/root :
# metaset -s tst01_dg
Set name = tst01_dg, Set number = 1

Host                Owner
  server101          Yes
  server102

Drive   Dbase
  d41    Yes   <=========== DID device
  d42    Yes   <=========== DID device
  d43    Yes   <=========== DID device
  d44    Yes   <=========== DID device
  d49    Yes   <=========== DID device
  d50    Yes   <=========== DID device
  d51    Yes   <=========== DID device
  d61    Yes   <=========== DID device
  d62    Yes   <=========== DID device
The metaset output above shows DID devices. In Sun Cluster, the DID (Device ID) framework provides a unique device name for every disk. Since these LUNs are shared between the nodes, the same disk must remain available when the resource group becomes active on the partner node in an emergency, and that is why we use DID devices. The information about DID devices is kept in the CCR (Cluster Configuration Repository), and changes to it are replicated among the cluster nodes. We will cover this in more detail in a future post. From the metastat and metaset output above we can see that the soft partition d320 is carved out of the mirror d300, and the mirror d300 has a concat d301 built from the DID devices listed.
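To see how each shared LUN maps to a single DID instance across both nodes, you can list the full DID mapping from either node (scdidadm -L reports the mapping for all cluster nodes, while -l reports only the local node):

root@server101:/root : scdidadm -L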
Step-1: Request the storage team to allocate a new LUN for both cluster nodes.
Step-2: The storage team should give you the LUN ID; without it you cannot proceed further. The LUN here is 60050766018500BE70000000000000FC.
Step-3: With this information, check the visibility of the LUN from both cluster nodes:
root@server101:/root : echo | format | grep -i 60050766018500BE70000000000000FC
      46. c3t60050766018500BE70000000000000FCd0 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g60050766018500be70000000000000fc
root@server102:/root : echo | format | grep -i 60050766018500BE70000000000000FC
      46. c3t60050766018500BE70000000000000FCd0 <IBM-2145-0000 cyl 10238 alt 2 hd 32 sec 64>
          /scsi_vhci/ssd@g60050766018500be70000000000000fc
In case the LUN is not visible in the format output, follow the procedure below. These LUNs are presented to the server through dynamically reconfigurable hardware, which for FC LUNs means the fc-fabric attachment points, so first identify the fc-fabric controllers:

root@server101:/root : cfgadm -la | grep -i fabric
c1 fc-fabric connected configured unknown
c2 fc-fabric connected configured unknown
root@server101:/root :
root@server101:/root : cfgadm -c configure c1
root@server101:/root : cfgadm -c configure c2

Here we are telling c1 and c2 to reconfigure themselves so that the new LUN becomes visible and usable by Solaris. Perform Step-3 again; the LUN should appear now. If it does not, let the storage team know, cross-check with them that proper zoning was done, and after their confirmation recheck again using the cfgadm utility.
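If the fabric shows as configured but the device nodes still do not appear, rebuilding the /dev tree is a reasonable extra step (generic Solaris housekeeping, not specific to this setup; -C also cleans up dangling links):

root@server101:/root : devfsadm
root@server101:/root : devfsadm -C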
Step-4: Format the disk and label it.

# format -d <disk-name>

Then use the label option from the format menu.
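For the LUN above, using the device name seen in the format output, that would look like this (label is entered at the format> prompt):

# format -d c3t60050766018500BE70000000000000FCd0
format> label
format> quit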
Step-5: We need to create the DID device now. For this, the DID database, which lives in the CCR, needs to be updated, so issue this command on both cluster nodes, server101 and server102:

scdidadm -r

The -r option tells scdidadm to reconfigure the DID database: it re-scans the device trees and assigns identifiers to devices that were not recognized before. Never manually edit the DID database files without Sun Microsystems support. After the command has run, reconfirm whether the DID device was created:
root@server101:/root : scdidadm -l | grep -i 60050766018500BE70000000000000FC
9    server101:/dev/rdsk/c3t60050766018500BE70000000000000FCd0   /dev/did/rdsk/d9
root@server101:/root :
root@server102:/root : scdidadm -l | grep -i 60050766018500BE70000000000000FC
9    server102:/dev/rdsk/c3t60050766018500BE70000000000000FCd0   /dev/did/rdsk/d9
root@server102:/root :
Step-6: We need to update the global device namespace, a Sun Cluster 3.2 feature mounted under the /global directory. It is visible to every node in the cluster and consists of links to the physical devices, which is what makes a device accessible from both nodes. Run scgdevs, the global device namespace administration script, on any one of the nodes.
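For example, from server101 (the script takes no arguments for this use):

root@server101:/root : scgdevs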
Step-7: Now check the disk path. It should be monitored by the cluster, and a failure of a monitored disk path could cause the node to panic. Our DID device is d9:
root@server101:/root : scdpm -p all | grep "d9"
server101:/dev/did/rdsk/d9       Fail
server102:/dev/did/rdsk/d9       Fail
root@server101:/root :
The disk path state is Fail, so we need to bring it to a proper valid state: simply un-monitor the disk path and then re-monitor it.
root@server101:/root : scdpm -u /dev/did/rdsk/d9
root@server101:/root : scdpm -m /dev/did/rdsk/d9
root@server101:/root : scdpm -p all | grep "d9"
server101:/dev/did/rdsk/d9       Ok
server102:/dev/did/rdsk/d9       Ok
Step-8: Add the DID device d9 to the diskset tst01_dg now.

root@server101:/root : metaset -s tst01_dg -a /dev/did/dsk/d9
Step-9: Once you add the DID device to the diskset, it is automatically repartitioned to the same VTOC layout as the other disks in the diskset: a small slice 7 is reserved for the state database replica and the rest of the disk is placed in slice 0.
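As a quick sanity check you can print the new label from either node (slice 2 conventionally represents the whole disk):

root@server101:/root : prtvtoc /dev/did/rdsk/d9s2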
Step-10: Check the diskset out:

root@server101:/root : metaset -s tst01_dg

Set name = tst01_dg, Set number = 1

Host                Owner
  server101          Yes
  server102

Drive   Dbase
  d41    Yes
  d42    Yes
  d43    Yes
  d44    Yes
  d49    Yes
  d50    Yes
  d51    Yes
  d61    Yes
  d62    Yes
  d6     Yes
  d9     Yes   <===================== new DID device is in place
Step-11: Attach the DID device to the submirror d301 as shown:

root@server101:/root : metattach -s tst01_dg d301 /dev/did/dsk/d9s0
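At this point metastat should show d9s0 as an additional stripe at the end of d301, with the size of d300/d301 grown by the size of the new LUN:

root@server101:/root : metastat -s tst01_dg d301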
Step-12: Now grow the soft partition to the desired size and then grow the file system on top of it.
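Note that growfs can only grow the file system up to the size of the underlying metadevice, so the 2 GB soft partition d320 has to be extended first. A minimal sketch, assuming we add the 10 GB mentioned at the start:

root@server101:/root : metattach -s tst01_dg d320 10g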
root@server101:/root : growfs -M /export/zones/tst01/oracle/sapdata0 /dev/md/tst01_dg/rdsk/d320
/dev/md/rdsk/d320: Unable to find Media type. Proceeding with system determined parameters.
Warning: 9216 sector(s) in last cylinder unallocated
/dev/md/rdsk/d320: 2107392 sectors in 104 cylinders of 24 tracks, 848 sectors
1029.0MB in 26 cyl groups (4 c/g, 39.75MB/g, 19008 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 82288, 164544, 246800, 329056, 411312, 493568, 575824, 658080, 740336,
1316128, 1398384, 1480640, 1562896, 1645152, 1727408, 1809664, 1891920,
1974176, 2056432
root@server101:/root :
root@server101:/root : df -h /export/zones/tst01/oracle/sapdata0
Filesystem                  size   used  avail capacity  Mounted on
/dev/md/tst01_dg/dsk/d320    11G   3.2M  10.9G     1%    /export/zones/tst01/oracle/sapdata0
root@server101:/root :
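As a final check, the metadevice side should agree with df, with the soft partition now reporting the grown size:

root@server101:/root : metastat -s tst01_dg d320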