Monday, August 25, 2008

How to Rename a Solaris Zone?

A few days back I needed to rename one of my Solaris zones from "orazone" to "oraprodzone". I followed the steps below to rename the zone successfully.

STEP 1: Shut down the zone "orazone"

Issue the following commands from the global zone to shut down orazone.

globalzone# zoneadm list -iv
ID NAME STATUS PATH
0 global running /
2 orazone running /zones/orazone
globalzone# zoneadm -z orazone halt
globalzone# zoneadm list -iv
ID NAME STATUS PATH
0 global running /
- orazone installed /zones/orazone
globalzone#

STEP 2: Rename the Zone from "orazone" to "oraprodzone"

Enter the zone configuration from the global zone using the commands below.

globalzone# zonecfg -z orazone
zonecfg:orazone> set zonename=oraprodzone
zonecfg:orazone> commit
zonecfg:orazone> exit

globalzone# zoneadm list -vc
ID NAME STATUS PATH BRAND
0 global running / native
- oraprodzone installed /zones/orazone native

STEP 3: Boot the zone

After you have made the above changes, boot the zone from the global zone using the commands below.

globalzone# zoneadm -z oraprodzone boot
globalzone# zoneadm list -iv

ID NAME STATUS PATH
0 global running /
2 oraprodzone running /zones/orazone

Done!

There is another way to rename a zone (not supported, but it worked for me). It's not the right way to do it, but I will mention it here as well.

Renaming zone orazone to oraprodzone

Perform all of the steps below as root in the global zone.
First, shut down your orazone zone:

globalzone# zoneadm -z orazone halt
globalzone# vi /etc/zones/index

change orazone to oraprodzone

globalzone# cd /etc/zones
globalzone# mv orazone.xml oraprodzone.xml
globalzone# vi oraprodzone.xml

change orazone to oraprodzone

globalzone# cd /zones
(/zones is where I have stored all the zones)

globalzone# mv orazone oraprodzone

- cd into your new zone's root (/zones/oraprodzone/root/etc) and modify its etc/hosts, etc/nodename and etc/hostname.xxx files

globalzone# cd /zones/oraprodzone/root/etc
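Below is a minimal sketch of the file edits inside the zone's root, assuming the zone's hostname should also change from orazone to oraprodzone and that the zone has a single hostname.xxx file (the interface name ce0 is only an example; use whatever file is actually present):

globalzone# echo "oraprodzone" > nodename
globalzone# vi hosts (change orazone to oraprodzone)
globalzone# vi hostname.ce0 (change orazone to oraprodzone)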

- boot the renamed zone
globalzone# zoneadm -z oraprodzone boot

Feel free to leave a comment :)

Tuesday, August 5, 2008

Password Securing Guide - Solaris

Hello All, I am often criticized for using very cryptic passwords on my systems, with multiple combinations of numbers and special characters. But really, it is good practice to maintain complex passwords on your systems, so that they can't be easily guessed or broken into with some silly dictionary-attack tool.

If you are not aware of it, let me tell you this (sometimes you learn something that takes you by surprise and you ask yourself, "How come I didn't already know this?"): Solaris systems by default still store traditional salted crypt passwords (the default crypt_unix(5) algorithm). Take a closer look at the /etc/shadow file and you will see something like this -

vishal:bwtNbxhjKdK7k:13223::::::

The field "bwtNbxhjKdK7k" is nothing but your salted crypt password and this is the default out of the box password format for Solaris and you would be surprized to know that the length of these passwords cannot exceed 8 characters. So if you typed your password as "barackobama", then your effective password is "barackob" ONLY.Try it for yourself once to know what i am talking about.

OK, so your next question would be: how do you go about fixing this? It's pretty simple and straightforward. All you need to do is change your password scheme. Solaris out of the box supports the default scheme plus three stronger alternatives to choose from. Do a cat /etc/security/policy.conf

my-server # cat /etc/security/policy.conf
#
# Copyright 1999-2002 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# /etc/security/policy.conf
#
# security policy configuration for user attributes. see policy.conf(4)
#
#ident "@(#)policy.conf 1.6 02/06/19 SMI"
#
AUTHS_GRANTED=solaris.device.cdrw
PROFS_GRANTED=Basic Solaris User

# crypt(3c) Algorithms Configuration
#
# CRYPT_ALGORITHMS_ALLOW specifies the algorithms that are allowed to
# be used for new passwords. This is enforced only in crypt_gensalt(3c).
#
CRYPT_ALGORITHMS_ALLOW=1,2a,md5

# To deprecate use of the traditional unix algorithm, uncomment below
# and change CRYPT_DEFAULT= to another algorithm. For example,
# CRYPT_DEFAULT=1 for BSD/Linux MD5.
#
#CRYPT_ALGORITHMS_DEPRECATE=__unix__

# The Solaris default is the traditional UNIX algorithm. This is not
# listed in crypt.conf(4) since it is internal to libc. The reserved
# name __unix__ is used to refer to it.
#
CRYPT_DEFAULT=__unix__

Pay special attention to the CRYPT_ALGORITHMS_ALLOW and CRYPT_DEFAULT lines above. These are the algorithms the system can use to store your passwords, which currently includes the weak CRYPT_DEFAULT=__unix__ setting, i.e. crypt_unix. The other crypt algorithms that are allowed are CRYPT_ALGORITHMS_ALLOW=1,2a,md5; further details can be found under /etc/security/crypt.conf

my-server # cat /etc/security/crypt.conf
#
# Copyright 2002 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)crypt.conf 1.1 02/06/19 SMI"
#
# The algorithm name __unix__ is reserved.

1 crypt_bsdmd5.so.1
2a crypt_bsdbf.so.1
md5 crypt_sunmd5.so.1

Let me explain each of these a little more -

a) 1 - (crypt_bsdmd5.so): One-way password hashing module for use with crypt(3C) that uses the MD5 message hash algorithm. The output is compatible with md5crypt on BSD and Linux systems. Password Limit: 255 chars

b) 2a - (crypt_bsdbf.so): One-way password hashing module for use with crypt(3C) that uses the Blowfish cryptographic algorithm. Password Limit: 255 chars

c) md5 - (crypt_sunmd5.so): One-way password hashing module for use with crypt(3C) that uses the MD5 message hash algorithm. This module is designed to make it difficult to crack passwords that use brute force attacks based on high speed MD5 implementations that use code inlining, unrolled loops, and table lookup. Password Limit: 255 chars

So you have all of the above to choose from. To switch to a better and more secure password scheme, do the following -

Edit the two lines in /etc/security/policy.conf from

#CRYPT_ALGORITHMS_DEPRECATE=__unix__
CRYPT_DEFAULT=__unix__

to-

(uncomment this line)
CRYPT_ALGORITHMS_DEPRECATE=__unix__

(change this line to your password scheme of choice)
CRYPT_DEFAULT=md5
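As a quick sanity check (a sketch, assuming a local user named vishal and CRYPT_DEFAULT=md5), reset the user's password and look at the hash prefix in /etc/shadow - crypt_sunmd5 hashes start with $md5$, crypt_bsdmd5 hashes with $1$ and Blowfish hashes with $2a$:

my-server # passwd vishal
my-server # grep '^vishal:' /etc/shadow
vishal:$md5$...<rest of hash>...:13223::::::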

You can also force move from one algorithm to another by editing the

CRYPT_ALGORITHMS_ALLOW=

line in policy.conf instead of the deprecation line.

NOTE - AFTER MAKING THE CHANGE, MAKE SURE YOU CHANGE YOUR USERS' PASSWORDS USING THE passwd COMMAND SO THAT, GOING FORWARD, YOUR SYSTEM SAVES PASSWORDS IN THE PASSWORD SCHEME OF YOUR CHOICE. IT CAN BE A HASSLE TO DO THIS FOR MANY USERS, BUT YOU CAN ALWAYS WRITE A SCRIPT TO AUTOMATE THE TASK.
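Below is a minimal sketch of such a script, assuming local users with UIDs of 100 and above in /etc/passwd; passwd -f simply forces each user to choose a new password (which will then be stored in the new scheme) at their next login:

#!/usr/bin/sh
# force a password change at next login for every local user with UID >= 100
for u in `awk -F: '$3 >= 100 {print $1}' /etc/passwd`
do
        passwd -f "$u"
done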

ANOTHER IMPORTANT NOTE - AFTER CHANGING YOUR PASSWORD SCHEME, SOME OF YOUR ADMIN APPS LIKE SOLARIS MANAGEMENT CONSOLE, WEBMIN OR WBEM (AND OTHERS THAT I MIGHT NOT BE AWARE OF) MAY NOT WORK. SINCE I DON'T USE THEM AT ALL, IT DOESN'T REALLY BOTHER ME MUCH.

Thursday, July 10, 2008

Oracle 9i on a Solaris 9 Container using Solaris Zones

Hi Folks, I have successfully managed to install and run Oracle 9i on the Solaris 9 Container that I just set up using Solaris Zones. I followed the standard install process and didn't encounter anything unusual. The zone maintains its own /etc/system file; all I did was add the parameters below to it, reboot the zone and install Oracle 9i on it. For now I just created a sample database named vishal on the system. I will work on this more and will post some feedback and how-to's here.

Info -

/etc/system -

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=410
set semsys:seminfo_semmns=1410
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
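To double-check that the zone picked up these settings after the reboot, you can grep the kernel's IPC limits out of sysdef; a quick sketch (the exact output format may vary between releases):

solaris9-zone # sysdef | grep -i shm
solaris9-zone # sysdef | grep -i sem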

solaris9-zone # ps -ef | grep pmon
root 28570 28564 0 11:17:28 pts/2 0:00 grep pmon
oracle 22418 5325 0 Jun 26 ? 3:07 ora_pmon_vishal

solaris9-zone # uname -a
SunOS solaris9 5.9 Generic_Virtual sun4u sparc SUNW,Sun-Fire-280R

solaris9-zone # isainfo -kv
64-bit sparcv9 kernel modules

Feel free to comment, if you have anything to add.

Sunday, June 22, 2008

Step By Step Guide ~ How to Install Solaris 9 as a container on a Solaris 10 System Using Zones

Hello folks, I finally managed to successfully install a Solaris 9 Container Zone on my Solaris 10 System. I would like to share the relevant information here.

MY HARDWARE -

Sun Fire 280R (2 x Ultra SPARC III+ at 1200MHz, 6GB RAM, 2 x 80GB HDD)
5 network interfaces - eri0, ce0, ce1, ce2, ce3

though I used only eri0 and ce0. The remaining interfaces I will use later, as and when the need arises.

MY SOFTWARE -

Solaris 10 OS - Update 5 (downloaded from sun.com and burned this sol-10-u5-ga-sparc-dvd.iso on a DVDROM)

Solaris 9.0 Container Application - (downloaded from sun.com this file named - s9containers-1_0-rr-solaris10-sparc.tar.gz)

Solaris 9.0 OS Image file - (downloaded from sun.com this file named - solaris9-image.flar)

MY OBJECTIVE -

To install Solaris 9 as a container (zone) on a Solaris 10 system. We have some native apps on Solaris 9, so I need to check whether they would work fine.

STEPS I FOLLOWED -

STEP 1 - Solaris 10 OS Installation on the System

Installed Solaris 10 Update 5 on my Sun Fire 280R. I chose Entire Distribution and allocated my second 80GB HDD solely for storing zone data, so I created and mounted /zones on my second hard disk (c1t1d0s2). I dedicated my first disk (c1t0d0) to running Solaris 10 exclusively. The installation completed without a hitch. Below is my install configuration -

hostname - sol10
ip address - 10.10.8.46/24 (on eri0 interface)

I plumbed my ce0 interface so that I can dedicate it to the Solaris 9 zone that I will be creating in the next step.

globalzone # cat /etc/release
Solaris 10 5/08 s10s_u5wos_10 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 24 March 2008

globalzone # ifconfig ce0 plumb
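To keep ce0 plumbed across reboots of the global zone, an empty /etc/hostname.ce0 file should do (a sketch; this leaves the interface unconfigured in the global zone, which is fine for a shared-IP zone):

globalzone # touch /etc/hostname.ce0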

STEP 2 - Install Solaris 9 Container Application 1.0

I uploaded the file s9containers-1_0-rr-solaris10-sparc.tar.gz to my home directory, after which I ran the commands below to install the application.

globalzone # gunzip s9containers-1_0-rr-solaris10-sparc.tar.gz
globalzone # tar -xvf s9containers-1_0-rr-solaris10-sparc.tar
globalzone # cd ./s9containers-1_0-rr/Product
globalzone # pkgadd -d ./

The following packages are available:
1 SUNWs9brandk Solaris 9 Containers: solaris9 brand support RTU
(sparc) 11.10.0,REV=2008.04.24.03.37
2 SUNWs9brandr Solaris 9 Containers: solaris9 brand support (Root)
(sparc) 11.10.0,REV=2008.04.24.03.37
3 SUNWs9brandu Solaris 9 Containers: solaris9 brand support (Usr)
(sparc) 11.10.0,REV=2008.04.24.03.37

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

(Select all and accept all the default parameters. The installation completed successfully.)

STEP 3 - Create Solaris 9 branded zone

After the system booted, I used the commands below to create a branded Solaris 9 zone.

globalzone # zonecfg -z solaris9
solaris9: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris9> create -b
zonecfg:solaris9> set brand=solaris9
zonecfg:solaris9> set autoboot=false
zonecfg:solaris9> set zonepath=/zones/solaris9
zonecfg:solaris9> add net
zonecfg:solaris9:net> set physical=ce0
zonecfg:solaris9:net> set address=10.10.8.91/24
zonecfg:solaris9:net> end
zonecfg:solaris9> info
zonename: solaris9
zonepath: /zones/solaris9
brand: solaris9
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
net:
address: 10.10.8.91/24
physical: ce0
zonecfg:solaris9> verify
zonecfg:solaris9> commit
zonecfg:solaris9> exit
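It can also be handy to keep a copy of the zone configuration; a quick sketch using zonecfg's export subcommand (the output path is just an example):

globalzone # zonecfg -z solaris9 export > /export/home/vishal/solaris9.cfg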

STEP 4 - Installed Solaris 9 on the Branded Zone

I uploaded the file solaris9-image.flar to my home directory and ran the commands below to install the Solaris 9 zone.

globalzone # zoneadm -z solaris9 install -u -a /export/home/vishal/solaris9-image.flar
Log File: /var/tmp/solaris9.install.846.log
Source: /export/home/vishal/solaris9-image.flar
Installing: This may take several minutes...


Postprocessing: This may take several minutes...

Result: Installation completed successfully.
Log File: /zones/solaris9/root/var/log/solaris9.install.846.log
globalzone #
globalzone #
globalzone # zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- solaris9 installed /zones/solaris9 solaris9 shared

globalzone # cat /var/tmp/solaris9.install.846.log
[Mon Jun 23 12:09:03 SGT 2008] Log File: /var/tmp/solaris9.install.846.log
[Mon Jun 23 12:09:03 SGT 2008] Product: Solaris 9 Containers 1.0
[Mon Jun 23 12:09:03 SGT 2008] Installer: solaris9 brand installer 1.21
[Mon Jun 23 12:09:03 SGT 2008] Zone: solaris9
[Mon Jun 23 12:09:03 SGT 2008] Path: /zones/solaris9
[Mon Jun 23 12:09:03 SGT 2008] Starting pre-installation tasks.
[Mon Jun 23 12:09:03 SGT 2008] Installation started for zone "solaris9"
[Mon Jun 23 12:09:03 SGT 2008] Source: /export/home/vishal/solaris9-image.flar
[Mon Jun 23 12:09:03 SGT 2008] Media Type: flash archive
[Mon Jun 23 12:09:03 SGT 2008] Installing: This may take several minutes...
[Mon Jun 23 12:09:03 SGT 2008] cd /zones/solaris9/root &&
[Mon Jun 23 12:09:03 SGT 2008] do_flar < "/export/home/vishal/solaris9-image.flar"
[Mon Jun 23 12:13:58 SGT 2008] Sanity Check: Passed. Looks like a Solaris 9 system.
[Mon Jun 23 12:13:58 SGT 2008] Postprocessing: This may take several minutes...
[Mon Jun 23 12:13:58 SGT 2008] running: p2v -u solaris9
[Mon Jun 23 12:13:58 SGT 2008] Postprocess: Gathering information about zone solaris9
[Mon Jun 23 12:13:58 SGT 2008] Postprocess: Creating mount points
[Mon Jun 23 12:13:58 SGT 2008] Postprocess: Processing /etc/system
[Mon Jun 23 12:13:58 SGT 2008] Postprocess: Booting zone to single user mode
[Mon Jun 23 12:14:11 SGT 2008] Postprocess: Applying p2v module S20_apply_patches
[Sun Jun 22 21:14:12 PDT 2008] S20_apply_patches: Unpacking patch: 115986-03
[Sun Jun 22 21:14:12 PDT 2008] S20_apply_patches: Installing patch: 115986-03

Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch number 115986-03 has been successfully installed.
See /var/sadm/patch/115986-03/log for details

Patch packages installed:
SUNWesu
SUNWesxu

[Sun Jun 22 21:14:29 PDT 2008] S20_apply_patches: Unpacking patch: 112963-32
[Sun Jun 22 21:14:29 PDT 2008] S20_apply_patches: Installing patch: 112963-32

Checking installed patches...
Patch 112963-32 has already been applied.
See patchadd(1M) for instructions.

Patchadd is terminating.
[Mon Jun 23 12:14:33 SGT 2008] Postprocess: Applying p2v module S31_fix_net
[Mon Jun 23 12:14:33 SGT 2008] Postprocess: Applying p2v module S32_fix_nfs
[Mon Jun 23 12:14:34 SGT 2008] Postprocess: Applying p2v module S33_fix_vfstab
[Mon Jun 23 12:14:34 SGT 2008] Postprocess: Applying p2v module S34_fix_inittab
[Mon Jun 23 12:14:34 SGT 2008] Postprocess: Applying p2v module S35_fix_crontab
[Mon Jun 23 12:14:34 SGT 2008] Postprocess: Applying p2v module S36_fix_pam_conf
[Mon Jun 23 12:14:34 SGT 2008] Postprocess: Applying p2v module S40_setup_preload
[Mon Jun 23 12:14:35 SGT 2008] Postprocess: Performing zone sys-unconfig
[Mon Jun 23 12:15:00 SGT 2008] Postprocess: Postprocessing successful.
[Mon Jun 23 12:15:00 SGT 2008] Result: Postprocessing complete.
[Mon Jun 23 12:15:01 SGT 2008] Service Tag: Gathering information about zone solaris9
[Mon Jun 23 12:15:01 SGT 2008] Service Tag: Adding service tag: urn:st:f703f244-18f1-cf25-a9db-fdd4ea20ffe6
Solaris 9 Containers 1.0 added
Product instance URN=urn:st:f703f244-18f1-cf25-a9db-fdd4ea20ffe6
[Mon Jun 23 12:15:01 SGT 2008] Service Tag: Operation successful.
[Mon Jun 23 12:15:01 SGT 2008]
[Mon Jun 23 12:15:01 SGT 2008] Result: Installation completed successfully.
[Mon Jun 23 12:15:01 SGT 2008] Log File: /zones/solaris9/root/var/log/solaris9.install.846.log
globalzone #

STEP 5 - Configuring the Solaris 9 zone

The configuration process involves booting the zone and then setting up the hostname, IP address configuration, time zone settings, naming services, etc.

globalzone # zoneadm -z solaris9 boot

globalzone # zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 solaris9 running /zones/solaris9 solaris9 shared
globalzone # zlogin -C solaris9

[zlogin solaris console]

After this you will be asked to set up your hostname, IP address, time zone settings, naming services, etc. After successfully completing this, the zone reboots. Once the system has rebooted, you are all set to go.

globalzone # zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 solaris9 running /zones/solaris9 solaris9 shared
globalzone #


SOME QUICK TESTS I PERFORMED

After all this, I was able to connect over SSH to the Solaris 9 container I had just created. Just to test things out, I ran the following commands -

solaris9-zone # uname -a
SunOS solaris9 5.9 Generic_Virtual sun4u sparc SUNW,Sun-Fire-280R

solaris9-zone # cat /etc/release
Solaris 9 9/05 HW s9s_u9wos_06b SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 25 September 2006
solaris9-zone # df -h
Filesystem size used avail capacity Mounted on
/ 67G 1.9G 65G 3% /
/.SUNWnative/lib 9.6G 162M 9.4G 2% /.SUNWnative/lib
/.SUNWnative/platform
9.6G 162M 9.4G 2% /.SUNWnative/platform
/.SUNWnative/usr 29G 3.1G 25G 11% /.SUNWnative/usr
/dev 67G 1.9G 65G 3% /dev
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 11G 16K 11G 1% /etc/svc/volatile
/dev/ksyms 29G 3.1G 25G 11% /dev/ksyms
fd 0K 0K 0K 0% /dev/fd
swap 11G 0K 11G 0% /tmp

solaris9-zone # psrinfo
0 on-line since 06/23/2008 11:52:30
1 on-line since 06/23/2008 11:52:31
solaris9-zone #

I hope the above is useful to you. Feel free to comment if you happen to have any questions.

Saturday, June 21, 2008

Installing Solaris 9 Zone on a Solaris 10 System

Hello Folks,

I am now working on installing Solaris 9 on a Solaris 10 system using Solaris 10 Zones. If you don't already know, Solaris 10 can run container versions of Solaris 8 and Solaris 9 inside its zones. There are some specific packages that you need to install in your global zone to make this work. From what I have read so far, all that is needed is the Solaris 9 Containers 1.0 software and a Solaris 9 install image; you can download both from the Sun Download website. Also make sure that your Solaris 10 release is Update 4 or later. I will be illustrating the procedure with the Update 5 release. Stay tuned....

Wednesday, June 18, 2008

Step By Step Guide ~ How to Change Zone Network Parameters

I will presume here that the zone is already created, and I will detail the process of changing an "existing" zone's network parameters.

Please Note - The network parameters can be changed without halting the zone, but the changes only take effect after the zone is rebooted, so be careful with this part. Before you can use any network interface in a local zone, that interface must first be plumbed in the global zone (e.g. ifconfig <interface> plumb, run in the global zone). If no network address is assigned to that interface in the global zone, its default address will be set to inet 0.0.0.0 netmask 0.
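For example, if ce1 has not yet been plumbed in the global zone, something like the following would do (a quick sketch; also touch /etc/hostname.ce1 if you want the plumb to persist across reboots of the global zone):

globalzone# ifconfig ce1 plumb
globalzone# ifconfig ce1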

Objective - To change network properties of zone "ora9"

FROM -
Interface - ce0
IP Address - 10.10.10.8/24

TO -
Interface - ce1
IP Address - 10.10.10.11/24


STEP 1 - Use zoneadm list on the global zone to show status of zones on your system.

On the global zone, use zoneadm list -cv to show the current status of all installed zones.

In the illustration below, I have two zones installed, one being ora9 and the other being ora8.

globalzone# zoneadm list -cv
ID NAME STATUS PATH
0 global running /
1 ora9 running /zone/ora9
- ora8 configured /zone/ora8
globalzone#

Note: You may also use zoneadm -z <zonename> list -v to verify a specific zone's status.

STEP 2 - Use zonecfg -z <zonename> to enter the zone configuration environment

On the global zone, use zonecfg -z <zonename> to enter the zone configuration environment. The environment prompt "zonecfg:" will be displayed.

Use info in the zone configuration environment to verify the network values.

globalzone# zonecfg -z ora9
zonecfg:ora9> info
zonepath: /zone/ora9
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 10.10.10.8/24
physical: ce0
zonecfg:ora9>

STEP 3 - Use set address= and set physical= to change the network address and physical interface.

Use add net to create a new net resource, then use set address= and set physical= inside it in the zone configuration environment.

zonecfg:ora9> add net
zonecfg:ora9:net> set address=10.10.10.11/24
zonecfg:ora9:net> set physical=ce1
zonecfg:ora9:net> info
net:
address: 10.10.10.11/24
physical: ce1
zonecfg:ora9:net> end
zonecfg:ora9> verify
zonecfg:ora9> info
zonepath: /zone/ora9
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 10.10.10.8/24
physical: ce0
net:
address: 10.10.10.11/24
physical: ce1
zonecfg:ora9>

The zone does not need to be rebooted immediately for this change; it will use the new values the next time it is halted and booted.

Note:
* If you set the autoboot resource property in a zone’s configuration to true, that zone is automatically booted when the global zone is booted. The default setting is false.
* For the zones to autoboot, the zones service svc:/system/zones:default must also be enabled.

STEP 4 - Use remove net address= to remove the old network values.

Use remove net address= in the zone configuration environment to remove the old network resource, then use info to verify.

zonecfg:ora9> info
zonepath: /zone/ora9
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 10.10.10.8/24
physical: ce0
net:
address: 10.10.10.11/24
physical: ce1
zonecfg:ora9> remove net address=10.10.10.8/24
zonecfg:ora9> info
zonepath: /zone/ora9
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 10.10.10.11/24
physical: ce1
zonecfg:ora9> commit
zonecfg:ora9>

Note:

Performing either remove net address=
or
remove net physical=
will delete the entire net resource, i.e. both network parameters. You do not need to, nor can you, run the second command after you have issued one of them.

STEP 5 - Use commit and exit to save the changes to the parameter

Use commit and exit in the environment to save the changes and leave the zone configuration environment.

zonecfg:ora9> verify
zonecfg:ora9> commit
zonecfg:ora9> exit
globalzone#

STEP 6 - Use zoneadm -z <zonename> halt followed by zoneadm -z <zonename> boot

The new network parameters will not come into force until the zone is rebooted. Use zoneadm -z ora9 halt to halt the zone, then use zoneadm -z ora9 boot to start the zone with the new network parameters.

globalzone# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
zone ora9
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843 mtu 1500 index 2
inet 10.10.10.14 netmask ffffff00 broadcast 10.10.10.255
ether [removed]
ce0:1: flags=1000843 mtu 1500 index 2
zone ora9
inet 10.10.10.8 netmask ffffff00 broadcast 10.10.10.255
ce1: flags=1000843 mtu 1500 index 3
inet 10.10.10.15 netmask ffffff00 broadcast 10.10.10.255
ether [removed]

globalzone# zoneadm -z ora9 halt

globalzone# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843 mtu 1500 index 2
inet 10.10.10.14 netmask ffffff00 broadcast 10.10.10.255
ether [removed]
ce1: flags=1000843 mtu 1500 index 3
inet 10.10.10.15 netmask ffffff00 broadcast 10.10.10.255
ether [removed]

globalzone# zoneadm -z ora9 boot

globalzone# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
zone ora9
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843 mtu 1500 index 2
inet 10.10.10.14 netmask ffffff00 broadcast 10.10.10.255
ether [removed]
ce1: flags=1000843 mtu 1500 index 3
inet 10.10.10.15 netmask ffffff00 broadcast 10.10.10.255
ether [removed]
ce1:1: flags=1000843 mtu 1500 index 3
zone ora9
inet 10.10.10.11 netmask ffffff00 broadcast 10.10.10.255

globalzone#

Thursday, May 22, 2008

Solaris - Crontab

Features:
1. Permits scheduling of scripts(shell/perl/python/ruby/PHP/etc.)/tasks on a per-user basis via individual cron tables.
2. Permits recurring execution of tasks
3. Permits one-time execution of tasks via 'at'
4. Logs results(exit status but can be full output) of executed tasks
5. Facilitates restrictions/permissions via - cron.deny,cron.allow,at.*

Directory Layout for Cron daemon:
/var/spool/cron - this directory and its sub-directories store cron & at entries
/var/spool/cron/atjobs - houses one-off 'at' jobs
- 787546321.a - corresponds to a user's at job

/var/spool/cron/crontabs - houses recurring jobs for users
- username - these files house recurring tasks for each user


Cron command:
crontab - facilitates the management of cron table files
- crontab -l - lists the cron table for the current user
(for root, this reads /var/spool/cron/crontabs/root)

Cron file format

m(0-59) h(0-23) dom(1-31) m(1-12) dow(0-6) command
10 3 * * * /usr/sbin/logadm - 3:10AM - every day
15 3 * * 0 /usr/lib/fs/nfs/nfsfind - 3:15 - every Sunday
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1

m(0-59) h(0-23) dom(1-31) m(1-12) dow(0-6) command
Note: (date/time/command) MUST be on 1 line
m = minute(0-59)
h = hour(0-23)
dom = day of the month(1-31)
m = month(1-12)
dow = day of the week(0-6) - 0=Sunday

Note: each line contains 6 fields/columns - 5 pertain to date & time of execution, and the 6th pertains to command to execute

#m h dom m dow
10 3 * * * /usr/sbin/logadm - 3:10AM - every day
* * * * * /usr/sbin/logadm - every minute,hour,dom,m,dow
*/5 * * * * /usr/sbin/logadm - every 5 minutes(0,5,10,15...)
1 0-4 * * * /usr/sbin/logadm - 1 minute after the hours 0-4
0 0,2,4,6,9 * * * /usr/sbin/logadm - top of the hours 0,2,4,6,9

1-9 0,2,4,6,9 * * * /usr/sbin/logadm - 1-9 minutes of hours 0,2,4,6,9

Note: Separate columns/fields using whitespace or tabs

###Create crontabs for root ###
Note: ALWAYS test commands prior to crontab/at submission

11 * * * * script.sh -va >> /reports/`date +\%F`.script.report
Note: the % sign must be escaped as \% inside a crontab entry, otherwise cron treats it as a newline.

Note: set EDITOR variable to desired editor
export EDITOR=vim

###script.sh ###
#!/usr/bin/bash
HOME=/export/home/vishal
df -h >> $HOME/`date +%F`.script.report
#END

Note: aim to reference scripts (shell/perl/python/ruby/PHP, etc.) from your crontab entries instead of embedding long commands full of special characters

Note:
Default Solaris install creates 'at.deny' & 'cron.deny'
You MUST not be included in either file to be able to submit at & cron entries

Conversely, if cron.allow and at.allow files exist, you MUST belong to either file to submit at or cron entries
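On Solaris these control files live under /etc/cron.d; a quick sketch to check which ones exist and who is listed in them:

# ls -l /etc/cron.d/cron.allow /etc/cron.d/cron.deny /etc/cron.d/at.allow /etc/cron.d/at.deny
# cat /etc/cron.d/cron.deny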

NETSTAT Usage in Solaris

Lists connections for ALL protocols & address families to and from machine
Address Families (AF) include:
INET - ipv4
INET6 - ipv6
UNIX - Unix Domain Sockets(Solaris/FreeBSD/Linux/etc.)

Protocols Supported in INET/INET6 include:
TCP, IP, ICMP(PING(echo/echo-reply)), IGMP, RAWIP, UDP(DHCP,TFTP,etc.)

Lists routing table
Lists DHCP status for various interfaces
Lists net-to-media table - network to MAC(network card) table


NETSTAT USAGE:

netstat - returns sockets by protocol using /etc/services for lookup

/etc/nsswitch.conf is consulted by netstat to resolve names for IPs

netstat -a - returns ALL protocols for ALL address families (TCP/UDP/UNIX)

netstat -an - -n option disables name resolution of hosts & ports

netstat -i - returns the state of interfaces. pay attention to errors/collisions/queue columns when troubleshooting performance

netstat -m - returns streams(TCP) statistics

netstat -p - returns net-to-media info (MAC/layer-2 info.) i.e. arp

netstat -P protocol (ip|ipv6|icmp|icmpv6|tcp|udp|rawip|raw|igmp) - returns active sockets for selected protocol

netstat -r - returns routing table

netstat -D - returns DHCP configuration (lease duration/renewal/etc.)

netstat -an -f address_family

netstat -an -f inet|inet6|unix

netstat -an -f inet - returns ipv4 only information

netstat -n -f inet

netstat -anf inet -P tcp

netstat -anf inet -P udp
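For example, to see only the listening TCP sockets for IPv4 (a quick sketch; the grep pattern assumes the default English output of netstat):

netstat -an -f inet -P tcp | grep LISTEN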

State Database Replicas - Introduction

Note: At least 3 replicas are required for a consistent, functional, multi-user Solaris system.

3 - yields at least 2 replicas in the event of a failure
Note: if replicas are on same slice or media and are lost, then Volume Management will fail, causing loss of data.
Note: place replicas on as many distinct controllers/disks as possible

Note: Max of 50 replicas per disk set

Note: Volume Management relies upon a Majority Consensus Algorithm (MCA) to determine the consistency of the volume information

3 replicas: half = 1.5, rounded down = 1, plus 1 = 2 replicas required (the half + 1 rule)

Note: try to create an even number of replicas
4 replicas: half = 2, plus 1 = 3 replicas required

State database replica is approximately 4MB by default - for local storage

Rules regarding storage location of state database replicas:
1. dedicated partition/slice - c0t1d0s3
2. local partition that is to be used in a volume(RAID 0/1/5)
3. UFS logging devices
4. '/', '/usr', 'swap', and other UFS partitions CANNOT be used to store state database replicas

Solaris Volume Management - Introduction

Solaris' Volume Management permits the creation of 5 object types:
1. Volumes (RAID 0 (concatenation or stripe), RAID 1 (mirroring), RAID 5 (striping with parity))
2. Soft partitions - permits the creation of very large storage devices
3. Hot spare pools - facilitates provisioning of spare storage for use when RAID-1/5 volume has failed
i.e. MIRROR
-DISK1
-DISK2
-DISK3 - spare

4. State database replica - MUST be created prior to volumes
- Contains configuration & status of ALL managed objects (volumes/hot spare pools/Soft partitions/etc.)

5. Disk sets - used when clustering Solaris in failover mode

Note: Volume Management facilitates the creation of virtual disks
Note: Virtual disks are accessible via: /dev/md/dsk & /dev/md/rdsk
Rules regarding Volumes:
1. State database replicas are required
2. Volumes can be created using dedicated slices
3. Volumes can be created on slices with state database replicas
4. Volumes created by the volume manager CANNOT be managed using 'format'; however, they can be managed using CLI tools (metadb, metainit) and the GUI tool (SMC)
5. You may use tools such as 'mkfs', 'newfs', 'growfs'
6. You may grow volumes using 'growfs'

Creating a Swap File/Partition in Solaris

swap -l | -s - to display swap information

mkfile size location_of_file - to create swap file
mkfile 512m /data2/swap2

swap -a /data2/swap2 - activates swap file

To remove swap file:
swap -d /data2/swap2 - removes swap space from kernel. does NOT remove file
rm -rf /data2/swap2

###Swap Partition Creation###
format - select disk - partition - select slice/modify
swap -a /dev/dsk/c0t2d0s1

Modify /etc/vfstab
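To make a swap file permanent across reboots, an /etc/vfstab entry along these lines should work (a sketch for the /data2/swap2 file created above):

/data2/swap2 - - swap - no -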

Implementing a Temporary File System (TEMPFS) in Solaris

TempFS provides very fast, in-memory (RAM) storage and can boost application performance

Steps:
1. Determine available memory and the amount you can spare for TEMPFS
-prtconf
- allocate 100MB
2. Execute mount command:

mkdir /tempdata && chmod 777 /tempdata && mount -F tmpfs -osize=100m swap /tempdata
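To confirm that the new tmpfs mount is in place (a quick sketch):

df -h /tempdata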

Note: TEMPFS data does NOT persist/survive across reboots
Note: TEMPFS data is lost when the following occurs:
1. TEMPFS mount point is unmounted: i.e. umount /tempdata
2. System reboot

Modify /etc/vfstab to include the TEMPFS mount point for reboots

swap - /tempdata tmpfs - yes -

How to determine file system associated with device in Solaris

1. fstyp /dev/dsk/c0t0d0s0 - returns file system type
2. grep mount point from /etc/vfstab - returns matching line
grep /var /etc/vfstab
3. cat /etc/mnttab - displays currently mounted file system

Steps to partition and create file systems on a Solaris Disk

1. unmount existing file systems
-umount /data2 /data3

2. confirm fdisk partitions via 'format' utility
-format - select disk - select fdisk

3. use partition - modify to create slices on desired drives
DISK1
-slice 0 - /dev/dsk/c0t1d0s0
DISK2
-slice 0 - /dev/dsk/c0t2d0s0

4. Create file systems using 'newfs /dev/rdsk/c0t1d0s0' and 'newfs /dev/rdsk/c0t2d0s0'

5. Use 'fsck /dev/rdsk/c0t1d0s0' to verify the consistency of the file system

6. Mount file systems at various mount points
mount /dev/dsk/c0t1d0s0 /data2 && mount /dev/dsk/c0t2d0s0 /data3

7. create entries in Virtual File System Table (/etc/vfstab) file
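The new /etc/vfstab entries might look like the following (a sketch matching the devices and mount points used above):

/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 /data2 ufs 2 yes -
/dev/dsk/c0t2d0s0 /dev/rdsk/c0t2d0s0 /data3 ufs 2 yes -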

Wednesday, May 21, 2008

NAS and SAN - A Comparison (for newbies)



At first glance NAS and SAN might seem almost identical, and in fact many times either will work in a given situation. After all, both NAS and SAN generally use RAID connected to a network, which then are backed up onto tape. However, there are differences -- important differences -- that can seriously affect the way your data is utilized. For a quick introduction to the technology, take a look at the diagrams below.

Wires and Protocols
Most people focus on the wires, but the difference in protocols is actually the most important factor. For instance, one common argument is that SCSI is faster than ethernet and is therefore better. Why? Mainly, people will say the TCP/IP overhead cuts the efficiency of data transfer. So a Gigabit Ethernet gives you throughputs of 60-80 Mbps rather than 100Mbps.

But consider this: the next version of SCSI (due date ??) will double the speed; the next version of ethernet (available in beta now) will multiply the speed by a factor of 10. Which will be faster? Even with overhead? It's something to consider.

The Wires
--NAS uses TCP/IP Networks: Ethernet, FDDI, ATM (perhaps TCP/IP over Fibre Channel someday)
--SAN uses Fibre Channel

The Protocols
--NAS uses TCP/IP and NFS/CIFS/HTTP
--SAN uses Encapsulated SCSI

More Differences

NAS - Almost any machine that can connect to the LAN (or is interconnected to the LAN through a WAN) can use NFS, CIFS or HTTP protocol to connect to a NAS and share files.
SAN - Only server class devices with SCSI Fibre Channel can connect to the SAN. The Fibre Channel of the SAN has a limit of around 10km at best

NAS - A NAS identifies data by file name and byte offsets, transfers file data or file meta-data (file's owner, permissions, creation date, etc.), and handles security, user authentication, and file locking.
SAN - A SAN addresses data by disk block number and transfers raw disk blocks.

NAS - A NAS allows greater sharing of information especially between disparate operating systems such as Unix and NT.
SAN - File Sharing is operating system dependent and does not exist in many operating systems.

NAS - File System managed by NAS head unit
SAN - File System managed by servers

NAS - Backups and mirrors (utilizing features like NetApp's Snapshots) are done on files, not blocks, for a savings in bandwidth and time. A Snapshot can be tiny compared to its source volume.
SAN - Backups and mirrors require a block by block copy, even if blocks are empty. A mirror machine must be equal to or greater in capacity compared to the source volume.

What's Next?
NAS and SAN will continue to butt heads for the next few months or years, but as time goes on, the boundaries between NAS and SAN are expected to blur, with developments like SCSI over IP and Open Storage Networking (OSN), the latter recently announced at Networld Interop. Under the OSN initiative, many vendors such as Amdahl, Network Appliance, Cisco, Foundry, Veritas, and Legato are working to combine the best of NAS and SAN into one coherent data management solution.

SAN / NAS Convergence
As Internet technologies like TCP/IP and Ethernet have proliferated worldwide, some SAN products are making the transition from Fibre Channel to the same IP-based approach NAS uses. Also, with the rapid improvements in disk storage technology, today's NAS devices now offer capacities and performance that once were only possible with SAN. These two industry factors have led to a partial convergence of NAS and SAN approaches to network storage.

Tips on Veritas Volume Manager (VxVM)

Important Notes for Installing VERITAS Volume Manager (VxVM)

* Check what VERITAS packages are currently installed:
# pkginfo | grep -i VRTS

* Make sure the boot disk has at least two free partitions with 2048 contiguous sectors (512 bytes each) available.
# prtvtoc /dev/rdsk/c0t0d0

* Make sure to save the boot disk information by using the "prtvtoc" command.
# prtvtoc /dev/rdsk/c0t0d0 > /etc/my_boot_disk_information

* Make sure to have a backup copy of the /etc/system and /etc/vfstab files.
* Add the packages to your system.
# cd <location_of_your_packages>
# pkgadd -d . VRTSvxvm VRTSvmman VRTSvmdoc

* Add the license key by using vxlicinst.
# vxlicinst

* Then run the Volume Manager Installation program.
# vxinstall

* Check the .profile file to ensure the following paths:
# PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSobgui/bin:/usr/sbin:/opt/VRTSob/bin
# MANPATH=$MANPATH:/opt/VRTS/man
# export PATH MANPATH

The VERITAS Enterprise Administrator (VEA) provides a Java-based graphical user interface for managing Veritas Volume Manager (VxVM).

Important Notes for how to set up VEA:

* Install the VEA software.
# cd <location_of_your_packages>
# pkgadd -a ../scripts/VRTSobadmin -d . VRTSob VRTSobgui VRTSvmpro VRTSfspro

* Start the VEA server if it is not already running.
# vxsvc -m (check or monitor whether the VEA server is running)
# vxsvc (start the VEA server)

* Start the Volume Manager user interface.
# vea &

The most handy Volume Manager commands:

* # vxdiskadm
* # vxdctl enable (Forces the VxVM configuration to rescan the disks. See devfsadm.)
* # vxassist (Assists in creating a VxVM volume.)
* # vxdisk list rootdisk (Displays information about the header contents of the root disk.)
* # vxdg list rootdg (Displays information about the content of the rootdg disk group.)
* # vxprint -g rootdg -thf | more (Displays information about volumes in rootdg.)

In order to create VERITAS Volume Manager, you may use the following three methods:

(This article emphasizes the CLI method.)

* VEA
* Command Line Interface (CLI)
* vxdiskadm

Steps to create a disk group:
* # vxdg init accountingdg disk01=c1t12d0

Steps to add a disk to a disk group:

* View the status of the disks.
# vxdisk list --or-- # vxdisk -s list

* Add one uninitialized disk to the free disk pool.
# vxdisksetup -i c1t8d0

* Add the disk to a disk group called accountingdg.
# vxdg init accountingdg disk01=c1t8d0
# vxdg -g accountingdg adddisk disk02=c2t8d0

Steps to split objects between disk groups:
* # vxdg split sourcedg targetdg object …

Steps to join disk groups:

* # vxdg join sourcedg targetdg

Steps to remove a disk from a disk group:

* Remove the "disk01" disk from the "accountingdg" disk group.
# vxdg -g accountingdg rmdisk disk01

Steps to remove a device from the free disk pool:

* Remove the c2t8d0 device from the free disk pool.
# vxdiskunsetup c2t8d0

Steps to manage disk group:

* To deport and import the "accountingdg" disk group.
# vxdg deport accountingdg
# vxdg -C import accountingdg
# vxdg -h other_hostname deport accountingdg

* To destroy the "accountingdg" disk group.
# vxdg destroy accountingdg

Steps to create a VOLUME:

* # vxassist -g accountingdg make payroll_vol 500m
* # vxassist -g accountingdg make gl_vol 1500m

Steps to mount a VOLUME:

If using ufs:

* # newfs /dev/vx/rdsk/accountingdg/payroll_vol
* # mkdir /payroll
* # mount -F ufs /dev/vx/dsk/accountingdg/payroll_vol /payroll

If using VxFS:

* # mkfs -F vxfs /dev/vx/rdsk/accountingdg/payroll_vol
* # mkdir /payroll
* # mount -F vxfs /dev/vx/dsk/accountingdg/payroll_vol /payroll

Steps to resize a VOLUME:

* # vxresize -g accountingdg payroll_vol 700m

Steps to remove a VOLUME:
* # vxedit -g accountingdg -rf rm payroll_vol

Steps to create a striped and mirrored VOLUME:
* # vxassist -g accountingdg make ac_vol 500m layout=stripe,mirror

Steps to create a RAID-5 VOLUME:
* # vxassist -g accountingdg make ac_vol 500m layout=raid5 ncol=5 disk01 …

Display the VOLUME layout:
* # vxprint -rth

Add or remove a mirror to/from an existing VOLUME:
* # vxassist -g accountingdg mirror payroll_vol
* # vxplex -g accountingdg -o rm dis payroll_plex01

Add a dirty region log to an existing VOLUME and specify the disk to use for the drl:
* # vxassist -g accountingdg addlog payroll_vol logtype=drl disk04

Move an existing VOLUME from its disk group to another disk group:
* # vxdg move accountingdg new_accountingdg payroll_vol

To start a VOLUME:
* # vxvol start volume_name

Steps to encapsulate and mirror the root disk:
* Use "vxdiskadm" to place another disk in rootdg with the same size or greater.
* Set the eeprom variable to enable VxVM to create a device alias in the OpenBoot PROM.

# eeprom use-nvramrc?=true

* Use "vxdiskadm" to mirror the root volumes. (Option 6)
* Test that you can boot from the mirror disk.

# vxmend off rootvol-01 (disable the boot disk)
# init 6
OK> devalias (check available boot disk aliases)
OK> boot vx-disk01

Write a script that uses the "for" statement to do some work.

# for i in 0 1 2 3 4
> do
> cp -r /usr/sbin /mydir${i}
> mkfile 5m /mydir${i}/my_input_file
> dd if=/mydir${i}/my_input_file of=/myother_dir/my_output_file${i} &
> done

Tuesday, May 20, 2008

Veritas Volume Manager - Quick Start Command Reference

Setting Up Your File System

Make a VxFS file system - mkfs -F vxfs [generic_options] [-o vxfs_options] char_device [size]
Mount a file system - mount -F vxfs [generic_options] [-o vxfs_options] block_device mount_point
Unmount a file system - umount mount_point
Determine file system type - fstyp [-v] block_device
Report free blocks/inodes - df -F vxfs [generic_options] [-o s] mount_point
Check/repair a file system - fsck -F vxfs [generic_options] [y|Y] [n|N] character_device

Online Administration

Resize a file system - fsadm [-b newsize] [-r raw_device] mount_point
Dump a file system - vxdump [options] mount_point
Restore a file system - vxrestore [options] mount_point
Create a snapshot file system - mount -F vxfs -o snapof=source_block_device,[snapsize=size] destination_block_device snap_mount_point
Create a storage checkpoint - fsckptadm [-nruv] create ckpt_name mount_point
List storage checkpoints - fsckptadm [-clv] list mount_point
Remove a checkpoint - fsckptadm [-sv] remove ckpt_name mount_point
Mount a checkpoint - mount -F vxfs -o ckpt=ckpt_name pseudo_device mount_point
Unmount a checkpoint - umount mount_point
Change checkpoint attributes - fsckptadm [-sv] set [nodata|nomount|remove] ckpt_name
Upgrade the VxFS layout - vxupgrade [-n new_version] [-r raw_device] mount_point
Display layout version - vxupgrade mount_point

Defragmenting a file system

Report on directory fragmentation - fsadm -D mount_point
Report on extent fragmentation - fsadm -E [-l largesize] mount_point
Defragment directories - fsadm -d mount_point
Defragment extents - fsadm -e mount_point
Reorganize a file system to support files > 2GB - fsadm -o largefiles mount_point

Intent Logging, I/O Types, and Cache Advisories

Change default logging behavior - mount -F vxfs [generic_options] -o log|delaylog|tmplog|nodatainlog|blkclear block_device mount_point
Change how VxFS handles buffered I/O operations - mount -F vxfs [generic_options] -o mincache=closesync|direct|dsync|unbuffered|tmpcache block_device mount_point
Change how VxFS handles I/O requests for files opened with O_SYNC and O_DSYNC - mount -F vxfs [generic_options] -o convosync=closesync|direct|dsync|unbuffered|delay block_device mount_point

Quick I/O

Enable Quick I/O at mount - mount -F vxfs -o qio mount_point
Disable Quick I/O - mount -F vxfs -o noqio mount_point
Treat a file as a raw character device - filename::cdev:vxfs:
Create a Quick I/O file through a symbolic link - qiomkfile [-h header_size] [-a] [-s size] [-e|-r size] file
Get Quick I/O statistics - qiostat [-i interval] [-c count] [-l] [-r] file
Enable cached QIO for all files in a file system - vxtunefs -s -o qio_cache_enable=1 mnt_point
Disable cached QIO for a file - qioadmin -S filename=OFF mount_point

Mirroring Disk With Solaris Disksuite (formerly Solstice)

The first step to setting up mirroring using DiskSuite is to install the DiskSuite packages and any necessary patches for systems prior to Solaris 9. SVM is part of the base system in Solaris 9. The latest recommended version of DiskSuite is 4.2 for systems running Solaris 2.6 and Solaris 7, and 4.2.1 for Solaris 8. There are currently three packages and one patch necessary to install DiskSuite 4.2. They are:

SUNWmd (Required)
SUNWmdg (Optional GUI)
SUNWmdn (Optional SNMP log daemon)
106627-19 (obtain latest revision)

The packages should be installed in the same order as listed above. Note that a reboot is necessary after the install as new drivers will be added to the Solaris kernel. For DiskSuite 4.2.1, install the following packages:

SUNWmdu (Commands)
SUNWmdr (Drivers)
SUNWmdx (64-Bit Drivers)
SUNWmdg (Optional GUI)
SUNWmdnr (Optional log daemon configs)
SUNWmdnu (Optional log daemon)

For Solaris 2.6 and 7, to make life easier, be sure to update your PATH and MANPATH variables to add DiskSuite's directories. Executables reside in /usr/opt/SUNWmd/sbin and man pages in /usr/opt/SUNWmd/man. In Solaris 8, DiskSuite files were moved to "normal" system locations (/usr/sbin) so path updates are not necessary.

The Environment
In this example we will be mirroring two disks, both on the same controller. The first disk will be the primary disk and the second will be the mirror. The disks are:

Disk 1: c0t0d0
Disk 2: c0t1d0

The partitions on the disks are presented below. There are a few items of note here. Each disk is partitioned exactly the same. This is necessary to properly implement the mirrors. Slice 2, commonly referred to as the 'backup' slice, which represents the entire disk, must not be mirrored. There are situations where slice 2 is used as a normal slice; however, this author would not recommend doing so. The three unassigned partitions on each disk are configured to each be 10MB. These 10MB slices will hold the DiskSuite State Database Replicas, or metadbs. More information on the state database replicas will be presented below. In DiskSuite 4.2 and 4.2.1, a metadb only occupies 1034 blocks (517KB) of space. In SVM, they occupy 8192 blocks (4MB). This can lead to many problems during an upgrade if the slices used for the metadb replicas are not large enough to support the new larger databases.

Disk 1:
c0t0d0s0: /
c0t0d0s1: swap
c0t0d0s2: backup
c0t0d0s3: unassigned
c0t0d0s4: /var
c0t0d0s5: unassigned
c0t0d0s6: unassigned
c0t0d0s7: /export

Disk 2:
c0t1d0s0: /
c0t1d0s1: swap
c0t1d0s2: backup
c0t1d0s3: unassigned
c0t1d0s4: /var
c0t1d0s5: unassigned
c0t1d0s6: unassigned
c0t1d0s7: /export

The Database State Replicas

The database state replicas serve a very important function in DiskSuite. They are the repositories of information on the state and configuration of each metadevice (A logical device created through DiskSuite is known as a metadevice). Having multiple replicas is critical to the proper operation of DiskSuite.

· There must be a minimum of three replicas. DiskSuite requires at least half of the replicas to be present in order to continue to operate.
· 51% of the replicas must be present in order to reboot.
· Replicas should be spread across disks and controllers where possible.
· In a three drive configuration, at least one replica should be on each disk, thus allowing for a one disk failure.
· In a two drive configuration, such as the one we present here, there must be at least two replicas per disk. If there were only three and the disk which held two of them failed, there would not be enough information for DiskSuite to function and the system would panic.

Here we will create our state replicas using the metadb command:

# metadb -a -f /dev/dsk/c0t0d0s3
# metadb -a /dev/dsk/c0t0d0s5
# metadb -a /dev/dsk/c0t0d0s6
# metadb -a /dev/dsk/c0t1d0s3
# metadb -a /dev/dsk/c0t1d0s5
# metadb -a /dev/dsk/c0t1d0s6

The -a and -f options used together create the initial replica. The -a option attaches a new database device and automatically edits the appropriate files.
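You can verify that all six replicas are in place with metadb; a quick sketch (the -i flag also prints a legend explaining the status flags):

# metadb -i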

Initializing Submirrors

Each mirrored meta device contains two or more submirrors. The meta device gets mounted by the operating system rather than the original physical device. Below we will walk through the steps involved in creating metadevices for our primary filesystems. Here we create the two submirrors for the / (root) filesystem, as well as a one way mirror between the meta device and its first submirror.

# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d0 -m d10

The first two commands create the two submirrors. The -f option forces the creation of the submirror even though the specified slice is a mounted filesystem. The next two arguments, 1 1, specify the number of stripes on the metadevice and the number of slices that make up the stripe. In a mirroring situation, this should always be 1 1. Finally, we specify the slice that the submirror will be built from.

After mirroring the root partition, we need to run the metaroot command. This command updates the root entry in /etc/vfstab with the new metadevice and adds the appropriate configuration information to /etc/system. Omitting this step is one of the most common mistakes made by those unfamiliar with DiskSuite. If you do not run the metaroot command before you reboot, you will not be able to boot the system!

# metaroot d0

Next, we continue to create the submirrors and initial one-way mirrors for the metadevices which will replace the swap, /var and /export partitions.

# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d1 -m d11
# metainit -f d14 1 1 c0t0d0s4
# metainit -f d24 1 1 c0t1d0s4
# metainit d4 -m d14
# metainit -f d17 1 1 c0t0d0s7
# metainit -f d27 1 1 c0t1d0s7
# metainit d7 -m d17

Updating /etc/vfstab

The /etc/vfstab file must be updated at this point to reflect the changes made to the system. The / partition will have already been updated through the metaroot command run earlier, but the system needs to know about the new devices for swap and /var. The entries in the file will look something like the following:

/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d4 /dev/md/rdsk/d4 /var ufs 1 yes -
/dev/md/dsk/d7 /dev/md/rdsk/d7 /export ufs 1 yes -

Notice that the device paths for the disks have changed from the normal style
/dev/dsk/c#t#d#s# and /dev/rdsk/c#t#d#s# to the new metadevice paths,
/dev/md/dsk/d# and /dev/md/rdsk/d#.

The system can now be rebooted. When it comes back up it will be running off of the new metadevices. Use the df command to verify this. In the next step we will attach the second half of the mirrors and allow the two drives to synchronize.

Attaching the Mirrors
Now we must attach the second half of each mirror. Once a mirror is attached, an automatic synchronization process begins to ensure that both halves of the mirror are identical. The progress of the synchronization can be monitored using the metastat command. To attach the submirrors, issue the following commands:

# metattach d0 d20
# metattach d1 d21
# metattach d4 d24
# metattach d7 d27
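To watch the resync progress, metastat can be used; a quick sketch (each mirror being synchronized shows a "Resync in progress" line with a percentage):

# metastat d0
# metastat | grep -i resync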

Final Thoughts

With an eye towards recovery in case of a future disaster, it is a good idea to find out the physical device path of the root partition on the second disk and create an Open Boot PROM (OBP) device alias, to ease booting the system if the primary disk fails.

In order to find the physical device path, simply do the following:

# ls -l /dev/dsk/c0t1d0s0

This should return something similar to the following:

/sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a

Using this information, create a device alias using an easy to remember name such as altboot. To create this alias, do the following in the Open Boot PROM:
ok nvalias altboot /sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a

It is now possible to boot off of the secondary device in case of failure using boot altboot from the OBP.

Gigabit Ethernet Configuration

These days all the newer Sun Systems ship with GE (Gigabit Ethernet) Port. Let me give you a quick run down on how to go about configuring the GE Port.

First, to make sure that your Network Interface Card is actually GE Supported, run the following command:

# kstat ce | more
module: ce instance: 0
name: ce0 class: net
adv_cap_1000fdx 1
adv_cap_1000hdx 1
adv_cap_100T4 0
adv_cap_100fdx 1
adv_cap_100hdx 1
adv_cap_10fdx 1
adv_cap_10hdx 1
adv_cap_asmpause 0
adv_cap_autoneg 1
adv_cap_pause 0

If you see the line adv_cap_1000fdx set to 1, the interface supports a GE link. For better throughput, I would suggest using a Cat-6 cable instead of a Cat-5e cable; Cat-5e is rated for a lower frequency (MHz) than Cat-6, so it can become a bottleneck if network traffic is high. Next we go about configuring the interface. Don't worry, it's pretty simple and straightforward.

ndd is a nice little utility used to examine and set kernel parameters, namely those of the TCP/IP drivers. Most kernel parameters accessible through ndd can be adjusted without rebooting the system. To see which parameters are available for a particular driver, use the following ndd command:

# ndd /dev/ce \?

Here /dev/ce is the name of the driver and the command lists the parameters for this particular driver. The backslash in front of "?" prevents the shell from interpreting the question mark as a special character. In most cases, however, omitting the backslash gives the same result.

Some Interpretations-

# ndd -set /dev/ce instance 2
Interpretation:
Choose ce2 network interface to set parameters.

# ndd -get /dev/ce link_mode
Interpretation:
0 -- half-duplex
1 -- full-duplex

# ndd -get /dev/ce link_speed
Interpretation:
0 -- 10 Mbit
1 -- 100 Mbit
1000 -- 1 Gbit

In most cases, if you enable auto-negotiation (adv_autoneg_cap) on your network interface, it should detect the GE connection and bring the interface up at 1000 Mbps. In some cases it might not. In such a situation, I would strongly suggest first forcing the GE link on the switch. If even that doesn't work, then move on to forcing your NIC to a GE link. The steps below can be followed.

To switch the NIC to auto-negotiation -

ndd -set /dev/ce instance 2
ndd -set /dev/ce adv_1000fdx_cap 1
ndd -set /dev/ce adv_1000hdx_cap 0
ndd -set /dev/ce adv_100fdx_cap 1
ndd -set /dev/ce adv_100hdx_cap 0
ndd -set /dev/ce adv_10fdx_cap 0
ndd -set /dev/ce adv_10hdx_cap 0
ndd -set /dev/ce adv_autoneg_cap 1

To force your NIC to 1000fdx

ndd -set /dev/ce instance 2
ndd -set /dev/ce adv_1000fdx_cap 1
ndd -set /dev/ce adv_1000hdx_cap 0
ndd -set /dev/ce adv_100fdx_cap 0
ndd -set /dev/ce adv_100hdx_cap 0
ndd -set /dev/ce adv_10fdx_cap 0
ndd -set /dev/ce adv_10hdx_cap 0
ndd -set /dev/ce adv_autoneg_cap 0

This should do it. In case you want to make these changes permanent, I would suggest that you create a file /etc/init.d/nddconfig and add the following entries to it -

#!/bin/sh

ndd -set /dev/ce instance 2
ndd -set /dev/ce adv_1000fdx_cap 1
ndd -set /dev/ce adv_1000hdx_cap 0
ndd -set /dev/ce adv_100fdx_cap 0
ndd -set /dev/ce adv_100hdx_cap 0
ndd -set /dev/ce adv_10fdx_cap 0
ndd -set /dev/ce adv_10hdx_cap 0
ndd -set /dev/ce adv_autoneg_cap 0

# ln -s /etc/init.d/nddconfig /etc/rc3.d/S31nddconfig
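Also make sure the script is executable, otherwise it will not run at boot; something like the following should do (the mode is just a sensible example):

# chmod 744 /etc/init.d/nddconfig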

NOTE: The /etc/system settings are not supported for configuring ce Ethernet adapters during system startup; you may either use ndd commands in an /etc/rc?.d script or create a /platform/sun4u/kernel/drv/ce.conf file with appropriate settings.

Please feel free to post your questions in the comments section if you have any. I would be happy to answer them.

BLOG Maintained by - Vishal Sharma | GetQuickStart