
Thursday, January 20, 2011

Creating a Zone in Solaris 10

To view a list and status of currently installed zones:
------------------------------------------------------

# zoneadm list -vi

ID NAME STATUS PATH
0 global running /
1 jumpstart running /u01/zones/jumpstart
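
To also include zones that are only configured but not yet installed, add the -c flag (the zone1 entry below is illustrative):

# zoneadm list -cv
ID NAME STATUS PATH
0 global running /
1 jumpstart running /u01/zones/jumpstart
- zone1 configured /u01/zones/zone1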


To create a new zone:
--------------------

# zonecfg -z <zonename>
(if the zone has not been configured at all previously, you will receive:

<zonename>: No such zone configured
Use 'create' to begin configuring a new zone.
)
A full example of zone creation for a zone called 'zone1':
---------------------------------------------------------

# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/u01/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/opt
zonecfg:zone1:fs> set special=/opt
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> add options [ro,nodevices]
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> add net
zonecfg:zone1:net> set address=10.67.1.151/24
zonecfg:zone1:net> set physical=eri0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
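
Before installing, you can double-check what you committed: zonecfg's info subcommand prints the current configuration, and export prints it back as a replayable command script (handy for recreating the zone's config later; the /tmp path is just an example):

# zonecfg -z zone1 info
# zonecfg -z zone1 export > /tmp/zone1.cfg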

# zoneadm -z zone1 install
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <1887> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <951> packages on the zone.
Initialized <951> packages on zone.
Zone <zone1> is initialized.
Installation of <1> packages was skipped.
Installation of these packages generated warnings:
The file contains a log of the zone installation.
# zoneadm -z zone1 boot
# zlogin -e \@ -C zone1 # -e sets the escape sequence for console session
[Connected to zone 'zone1' console]
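
(To drop out of the console session, type @. at the start of a line, since -e \@ set @ as the escape character.)

On its first boot the zone walks you through the interactive sysid questions on this console. To pre-answer them, you can drop a sysidcfg file under the zonepath after install but before the first boot; a minimal sketch, where the hostname, timezone, and password hash are illustrative:

# cat > /u01/zones/zone1/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
network_interface=PRIMARY { hostname=zone1 }
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Eastern
root_password=<encrypted-password-from-/etc/shadow>
EOF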

To Delete a Zone Permanently:
----------------------------

zoneadm -z <zonename> halt
zoneadm -z <zonename> uninstall
zonecfg -z <zonename> delete
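
For example, to remove 'zone1' completely (-F skips the confirmation prompts):

# zoneadm -z zone1 halt
# zoneadm -z zone1 uninstall -F
# zonecfg -z zone1 delete -F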

To Delete a zone in a weird state:
---------------------------------

If the install gets interrupted, or the configuration has problems, the zone can end up in an incomplete
state. In this state, it is difficult to uninstall, delete, or continue the configuration. To remove
the incomplete zone and start fresh, do the following:

1. remove the zone entry in /etc/zones/index:

global:installed:/
zone1:installed:/u01/zones/zone1
zone2:installed:/u01/zones/zone2
zone3:incomplete:/u01/zones/zone3 <-----------

2. delete the xml file associated with the zone under /etc/zones

3. delete the directory associated with the zone (if it has been created)
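
Before resorting to hand-editing /etc/zones/index, it is often worth trying the supported route first; a forced uninstall followed by a forced delete will usually clear an incomplete zone:

# zoneadm -z zone3 uninstall -F
# zonecfg -z zone3 delete -F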

How to install a Linux zone under Solaris 10

1. Make sure you have Solaris 10 for x86 Update 4 (or later) installed, as this supports Linux zones.

2. Obtain a distribution copy of CentOS or Red Hat ES Linux v3.5 to 3.8, and a copy of Adobe Reader v7 for Linux.

3. Install a zone as follows (use the appropriate values for your system):-

# mkdir -p /Zones/Linux
# chmod 700 /Zones/Linux
# zonecfg -z linux
linux: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:linux> create -t SUNWlx
zonecfg:linux> add net
zonecfg:linux:net> set physical=bfe0
zonecfg:linux:net> set address=192.168.200.31
zonecfg:linux:net> end
zonecfg:linux> set zonepath=/Zones/Linux
zonecfg:linux> verify
zonecfg:linux> commit
zonecfg:linux> exit

# zoneadm -z linux install


Please insert any supported Linux distribution disc, or a
supported Linux distribution DVD, in the removable media
drive and press <Return>.


4. Continue with installation until it completes, then boot the new linux zone:-

# zoneadm -z linux boot


5. Now log in to the new zone:-

# zlogin -C linux
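
If all went well, the zone shows up with the lx brand (the ID and exact columns below are illustrative):

# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 linux running /Zones/Linux lx shared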

How to install Solaris 10 zones using the Veritas file system (VxFS).

0[root@testserver(global):~]# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@83,4000/FJSV,ulsa@2,1/sd@0,0
1. c0t1d0
/pci@83,4000/FJSV,ulsa@2,1/sd@1,0
2. c6t60060E8004EA68000000EA6800000788d0
/scsi_vhci/ssd@g60060e8004ea68000000ea6800000788
3. c6t60060E8004EA69000000EA690000312Ed0
/scsi_vhci/ssd@g60060e8004ea69000000ea690000312e
Specify disk (enter its number): 2
selecting c6t60060E8004EA68000000EA6800000788d0
[disk formatted]


FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p


PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 2238 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2237 8.20GB (2238/0/0) 17187840
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 - wu 0 - 2237 8.20GB (2238/0/0) 17187840

/usr/lib/vxvm/bin/vxdisksetup -i Disk_1

vxdg init test_zone-001 adddisk=Disk_1

vxassist -g test_zone-001 make testzone 8g

mkfs -F vxfs /dev/vx/rdsk/test_zone-001/testzone

vi /etc/vfstab
/dev/vx/dsk/test_zone-001/testzone /dev/vx/rdsk/test_zone-001/testzone /zone/test-zone vxfs 1 yes -

mkdir -p /zone/test-zone

mount -a

df -h
zonecfg -z test-zone
zonecfg:test-zone> create
zonecfg:test-zone> set zonepath=/zone/test-zone
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
chmod 700 /zone/test-zone
zoneadm -z test-zone install
zoneadm -z test-zone boot

0[root@testserver(global):~]# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 test-zone running /zone/test-zone native shared

0[root@testserver(global):~]# zlogin test-zone
[Connected to zone 'test-zone' pts/1]
Last login: Thu Jan 20 05:12:46 on pts/1
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# df -h
Filesystem size used avail capacity Mounted on
/ 8.0G 159M 7.4G 3% /
/dev 8.0G 159M 7.4G 3% /dev
/lib 24G 1.6G 22G 7% /lib
/platform 24G 1.6G 22G 7% /platform
/sbin 24G 1.6G 22G 7% /sbin
/usr 24G 1.6G 22G 7% /usr
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 9.0G 248K 9.0G 1% /etc/svc/volatile
fd 0K 0K 0K 0% /dev/fd
swap 9.0G 0K 9.0G 0% /tmp
swap 9.0G 0K 9.0G 0% /var/run
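
Note that /lib, /platform, /sbin, and /usr report the global zone's 24G sizes because this is a sparse-root zone: those directories are read-only loopback (lofs) mounts inherited from the global zone. You can confirm this from the global zone, for example (output trimmed to one illustrative line):

# mount -v | grep /zone/test-zone/root/usr
/usr on /zone/test-zone/root/usr type lofs read only/setuid/dev=... on Thu Jan 20 05:10:01 2011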

Wednesday, January 19, 2011

Crash Dump & Core Dump


Crash dump: when an operating system has a fatal error, it generates a crash dump.

Core dump: when a process has a fatal error, it generates a core file.

Crash Dump Operation :
If a fatal operating system error occurs, the operating system prints a message to the console, describing the error. The operating system then generates a crash dump by writing some of the contents of the physical memory to a predetermined dump device, which must be a local disk slice. You can configure the dump device by using the dumpadm command. After the operating system has written the crash dump to the dump device, the system reboots. The crash dump is saved for future analysis to help determine the cause of the fatal error.
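
savecore normally runs automatically on the reboot after a panic, copying the dump from the dump device into the savecore directory. With a dedicated dump device you can also snapshot a live, running system, which is useful when a box is misbehaving but has not panicked; a sketch (output abbreviated):

# savecore -L
dumping to /dev/dsk/c8t0d0s7, offset 65536, content: kernel
...
# ls /var/crash/testserver
bounds unix.0 vmcore.0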


The dumpadm command displays the current crash dump configuration:

[root@testserver:/var/crash]# dumpadm
Dump content: kernel pages ---- [kernel memory pages]
Dump device: /dev/dsk/c8t0d0s7 (dedicated) --- [kernel memory will be dumped to the dedicated device, as per our configuration]
Savecore directory: /var/crash/testserver --- [a crash dump generates two files (unix.X & vmcore.X), which are written to the savecore directory]
Savecore enabled: yes


How to enable & disable saving crash dumps:

dumpadm -n -- [disable saving crash dumps]
0[root@testserver(global):/var/crash]# dumpadm
Dump content: kernel pages
Dump device: /dev/dsk/c8t0d0s7 (dedicated)
Savecore directory: /var/crash/testserver
Savecore enabled: no

dumpadm -y -- [enable saving crash dumps]
0[root@testserver(global):/var/crash]# dumpadm
Dump content: kernel pages
Dump device: /dev/dsk/c8t0d0s7 (dedicated)
Savecore directory: /var/crash/testserver
Savecore enabled: yes

How to modify the dump content

You can specify three types of data to dump:
1. kernel --> dump all kernel memory
2. all --> dump all of memory
3. curproc --> dump kernel memory plus the current pages of the process whose thread was executing when the crash occurred

[root@testserver-zfs-test(global):~]# dumpadm -c all
Dump content: all pages
Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /dump
Savecore enabled: yes

[root@testserver:~]# dumpadm -c curproc
Dump content: kernel and current process pages
Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /dump
Savecore enabled: yes

[root@testserver:~]# dumpadm -c kernel
Dump content: kernel pages
Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /dump
Savecore enabled: yes

How to modify the dump device:

dumpadm -d /dev/dsk/c0t1d0s1
dumpadm -d swap
0[root@testserver(global):/var/crash]# dumpadm -d swap
Dump content: kernel pages
Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /var/crash/testserver
Savecore enabled: yes

0[root@testserver(global):/var/crash]# dumpadm -d /dev/dsk/c8t0d0s7
Dump content: kernel pages
Dump device: /dev/dsk/c8t0d0s7 (dedicated)
Savecore directory: /var/crash/testserver
Savecore enabled: yes

How to Examine a Crash Dump

/usr/bin/mdb [-k] crashdump-file
-k Specifies kernel debugging mode by assuming the file is an operating system crash dump file.
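
A minimal session against the pair of files savecore produced (file names and dcmd output will vary):

# cd /var/crash/testserver
# mdb -k unix.0 vmcore.0
> ::status # panic string and basic dump details
> ::msgbuf # kernel message buffer leading up to the panic
> $q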


Core Dump Operation:

When a process terminates abnormally, it typically produces a core file. You can use the coreadm command to specify the name or location of core files produced by abnormally terminating processes.
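
For example, to display the current settings and to enable global core dumps into a central directory (the /var/core path and pattern are just an example; %f expands to the executable name and %p to the PID):

# coreadm
# mkdir -p /var/core
# coreadm -g /var/core/core.%f.%p -e global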

Tuesday, January 18, 2011

Verify Solaris 10 Multipathing/Configure SAN Disk

I was attempting to troubleshoot an issue: a user was complaining about slow performance on a SAN disk. The first thing I did was check to ensure that there were no performance issues on any disks that might have been causing this user's issues.


A quick iostat (-c CPU usage, -x extended device statistics, -z skip idle devices, -n logical device names, at a one-second interval) verified that everything was looking fine:

iostat -cxzn 1


This box is running Veritas, so let's check out the disks. vxdisk list shows one Sun 6140 disk.

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
Disk_0 auto:none - - online invalid
Disk_1 auto:none - - online invalid
SUN6140_0_1 auto:cdsdisk diskname_dg02 diskname_dg online nohotuse

luxadm is a utility that discovers FC devices (luxadm probe), shuts down devices (luxadm shutdown_device ...), runs firmware upgrades (luxadm download_firmware ...), and many other things. In this instance I use luxadm to get the true device name for my disk.

# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
Node WWN:200600a0b829a7a0 Device Type:Disk device
Logical Path:/dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2

I then ran luxadm display on the device. Below you can see that I do indeed have two paths to the device
(1 controller = one path, 2 controllers = 2 paths).

# luxadm display /dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
Vendor: SUN
Product ID: CSM200_R
Revision: 0619
Serial Num: SG71009283
Unformatted capacity: 12288.000 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x1
Maximum prefetch: 0x1
Device Type: Disk device
Path(s):

/dev/rdsk/c4t600A0B800029A7A000000DC747A8168Ad0s2
/devices/scsi_vhci/ssd@g600a0b800029a7a000000dc747a8168a:c,raw
Controller /devices/pci@1f,4000/SUNW,qlc@5,1/fp@0,0
Device Address 203700a0b829a7a0,1
Host controller port WWN 210100e08bb370ab
Class secondary
State STANDBY
Controller /devices/pci@1f,4000/SUNW,qlc@5/fp@0,0
Device Address 203600a0b829a7a0,1
Host controller port WWN 210000e08b9370ab
Class primary
State ONLINE

Had I only had one path, I would have run cfgadm. I would have seen that one of the fc-fabric devices was unconfigured. I then could have used cfgadm to configure it and enable my multipathing.

# cfgadm
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c1 scsi-bus connected unconfigured unknown
c2 fc-fabric connected configured unknown
c3 fc-fabric connected configured unknown



MPXIO Primer
Solaris I/O multipathing lets you set up multiple redundant paths to a storage system, giving you the benefits of load balancing and failover.

Need to enable MPxIO

Solaris 10 is the easiest, because the MPxIO capability is built in. You just need to turn it on!

To enable it, edit the /kernel/drv/fp.conf file. At the end it should say:

mpxio-disable="yes";

Just change yes to no and it will be enabled:

mpxio-disable="no";

Before multipathing, you should see two copies of each disk in format. Afterwards, you'll just see the one copy.

It assigns the next available controller ID, and makes up some horrendously long target number. For example:

Filesystem                                       kbytes    used     avail     capacity Mounted on
/dev/dsk/c6t600C0FF000000000086AB238B2AF0600d0s5 697942398 20825341 670137634 4%       /test

Finding WWN of HBA cards in Solaris 8, 9 and 10

bash-2.03# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d0s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d1s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d2s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d3s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d4s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d5s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d6s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d7s2

HBA card WWN

# prtconf -vp | grep wwn
port-wwn: 2100001b.3202f94b
node-wwn: 2000001b.3202f94b
port-wwn: 210000e0.8b90e795
node-wwn: 200000e0.8b90e795
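
On Solaris 10 you can also get the HBA port WWNs directly with fcinfo(1M) (not available on Solaris 8/9; output trimmed):

# fcinfo hba-port
HBA Port WWN: 2100001b3202f94b
...
State: online
HBA Port WWN: 210000e08b90e795
...
State: online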

#prtconf -vp | more

Node 0xf00e2f80
assigned-addresses: 81000810.00000000.00000300.00000000.00000100.82000814.00000000.00100000.00000000.00002000.82000830.00000000.00140000.00000000.00040000
version: 'QLA2460 Host Adapter Driver(SPARC): 1.11 10/03/05'
manufacturer: 'QLGC'
model: 'QLA2460 '
name: 'SUNW,qlc'
port-wwn: 2100001b.3202f94b
node-wwn: 2000001b.3202f94b
reg: 00000800.00000000.00000000.00000000.00000000.01000810.00000000.00000000.00000000.00000100.02000814.00000000.00000000.00000000.00001000
compatible: 'pci1077,140.1077.140.2' + 'pci1077,140.1077.140' + 'pci1077,140' + 'pci1077,2422.2' + 'pci1077,2422' + 'pciclass,c0400' + 'pciclass,0400'
short-version: '1.11 10/03/05'
#size-cells: 00000000
#address-cells: 00000002
device_type: 'scsi-fcp'
fcode-rom-offset: 0000aa00
66mhz-capable:
fast-back-to-back:
devsel-speed: 00000001
latency-timer: 00000040
cache-line-size: 00000010
max-latency: 00000000
min-grant: 00000040
interrupts: 00000001
class-code: 000c0400
subsystem-id: 00000140
subsystem-vendor-id: 00001077
revision-id: 00000002
device-id: 00002422
vendor-id: 00001077

Node 0xf00ee398
#size-cells: 00000000
#address-cells: 00000004
reg: 00000000.00000000
device_type: 'fp'
name: 'fp'

Node 0xf00eeaa0
device_type: 'block'
compatible: 'ssd'
name: 'disk'

Node 0xf00ef91c
assigned-addresses: 81001010.00000000.00000400.00000000.00000100.82001014.00000000.
version: 'QLA2460 Host Adapter Driver(SPARC): 1.11 10/03/05'
manufacturer: 'QLGC'
model: 'QLA2460 '
name: 'SUNW,qlc'
port-wwn: 210000e0.8b90e795
node-wwn: 200000e0.8b90e795
reg: 00001000.00000000.00000000.00000000.00000000.01001010.00000000.
compatible: 'pci1077,140.1077.140.2' + 'pci1077,140.1077.140' + 'pci1077,140' + 'pci1077,2422.2' + 'pci1077,2422' + 'pciclass,c0400' + 'pciclass,0400'
short-version: '1.11 10/03/05'
#size-cells: 00000000
#address-cells: 00000002
device_type: 'scsi-fcp'
fcode-rom-offset: 0000aa00
66mhz-capable:
fast-back-to-back:
devsel-speed: 00000001
latency-timer: 00000040
cache-line-size: 00000010
max-latency: 00000000
min-grant: 00000040
interrupts: 00000001
class-code: 000c0400
subsystem-id: 00000140
subsystem-vendor-id: 00001077
revision-id: 00000002
device-id: 00002422
vendor-id: 00001077

Node 0xf00fad34
#size-cells: 00000000
#address-cells: 00000004
reg: 00000000.00000000
device_type: 'fp'
name: 'fp'

Node 0xf00fb43c
device_type: 'block'
compatible: 'ssd'
name: 'disk'

For Solaris 8 and 9:
Run the following script to determine the WWNs of the HBAs that are currently being utilized:
#!/bin/sh
for i in `cfgadm | grep fc-fabric | awk '{print $1}'`
do
    dev="`cfgadm -lv $i | grep devices | awk '{print $NF}'`"
    wwn="`luxadm -e dump_map $dev | grep 'Host Bus' | awk '{print $4}'`"
    echo "$i: $wwn"
done

To show the link status of the cards

bash-2.03# luxadm -e port

Found path to 2 HBA ports

/devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED

To see the WWNs (using the addresses given to you by the previous commands):

it is the last entry that identifies itself as an HBA, so the HBA port WWN here is 2100001b3205e828

bash-2.03# luxadm -e dump_map /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 642113 0 50070e800475e108 50070e800475e108 0x0 (Disk device)
1 643f13 0 550070e800475e108 50070e800475e108 0x0 (Disk device)
2 643913 0 2100001b3205e828 2000001b3205e828 0x1f (Unknown Type,Host Bus Adapter)

SAN Foundation Software versions display as follows

bash-2.03# modinfo | grep SunFC
38 102bcd25 209b8 150 1 fcp (SunFC FCP v20070703-1.98)
39 102d4071 855c - 1 fctl (SunFC Transport v20070703-1.41)
42 102ead69 164e0 149 1 fp (SunFC Port v20070703-1.60)
44 10300a79 cd574 153 1 qlc (SunFC Qlogic FCA v20070212-2.19)

To show Sun/QLogic HBAs

bash-2.03# luxadm qlgc

Found Path to 2 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06
Complete

To show all vendor HBAs

bash-2.03# luxadm fcode_download -p

Found Path to 0 FC/S Cards
Complete

Found Path to 0 FC100/S Cards
Complete

Found Path to 2 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06
Complete

Found Path to 0 JNI1560 Devices.
Complete

Found Path to 0 Emulex Devices.
Complete