Monday, April 6, 2009

Step by Step: Migrate root UFS file system to ZFS

Solaris 10 10/08 has been released, and one of the great features that comes with it is ZFS boot.

People have been waiting for this for a long time, and will naturally be eager to migrate their root filesystem from UFS to ZFS. This article details how to do that using Live Upgrade, which lets you perform the migration with minimal downtime while keeping a safety net in case something goes wrong.

These instructions are aimed at users whose systems are ALREADY running Solaris 10 10/08 (Update 6).

Step 1: Create the Root zpool

The first thing you need to do is create the zpool that will hold your new root file system. It MUST exist before you can continue, so create and verify it:

# zpool create rootpool c1t0d0s0
# zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rootpool    10G  73.5K  10.0G   0%  ONLINE  -
#

If the slice you’ve selected currently has another filesystem on it, e.g. UFS or VxFS, you’ll need to use the -f flag to force the creation of the pool.
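For example, forcing the pool onto the slice used above would look something like this (the slice name is just the one from this article; substitute your own):

# zpool create -f rootpool c1t0d0s0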

You can use any name you like. I’ve chosen rootpool to make it clear what the pool’s function is.

Step 2: Create the Boot Environments (BEs)

Now we’ve got our zpool in place, we can create the BEs that will be used to migrate the current root filesystem across to the new ZFS filesystem.

Create the alternate boot environment (ABE) as follows:

# lucreate -c ufsBE -n zfsBE -p rootpool

This command will create two boot environments where:

- ufsBE is the name your current boot environment will be assigned. This can be anything you like and is your safety net. If something goes wrong, you can always boot back to this BE (unless you delete it).
- zfsBE is the name of your new boot environment that will be on ZFS and…
- rootpool is the name of the zpool you created in Step 1 for the new boot environment.

This command will take a while to run as it copies your ufsBE to your new zfsBE and will produce output similar to the following if all goes well:

# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/...> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/...>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/...> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-7Tc.mnt
updating /.alt.tmp.b-7Tc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
#

The x86 output is not much different; it will just include additional information about updating GRUB.

Update: You may get the following error from lucreate:

ERROR: ZFS pool <rootpool> does not support boot environments.

This is due to the label on the disk.

You need to relabel your root disk and give it an SMI label. You can do this with “format -e”: select the disk, go to “label”, and choose “[0] SMI label”. That should be all that’s needed, but while you’re at it, you may as well check that the partition table is still laid out the way you want. If it isn’t, make your changes and label the disk again.
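As a rough sketch, a relabelling session looks something like the following (the disk selection, geometry and prompt defaults shown here are illustrative and will differ on your system):

# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <...>
Specify disk (enter its number): 0
selecting c1t0d0
[disk formatted]
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? yes
format> quit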

For x86 systems, you also need to ensure your disk has a valid fdisk partition table.
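If the disk has no fdisk table at all, you can create a default Solaris partition spanning the whole disk with something like the following (the device name is just an example; use your own disk):

# fdisk -B /dev/rdsk/c1t0d0p0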

You should now be able to perform the lucreate.

The most likely reason your disk has an EFI label is that it has previously been used by ZFS as a whole disk. ZFS applies an EFI label when it is given whole disks; however, root pool disks currently require an SMI label (I believe this may change in the future).

Once the lucreate has completed, you can verify your Live Upgrade environments with lustatus:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
#

Step 3: Activate and Boot from ZFS zpool

We’re almost done. All we need to do now is activate our new ZFS boot environment and reboot:

# luactivate zfsBE
# init 6

NOTE: Ensure you reboot using “init 6” or “shutdown -i6”. Do NOT use “reboot”; init and shutdown run the shutdown scripts that complete the boot environment switch, while reboot bypasses them.

Remember, if you’re on SPARC, you’ll need to set the appropriate boot device at the OBP. luactivate will remind you.
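As a purely illustrative sketch (luactivate prints the exact instructions for your machine, and the device path below is made up):

ok printenv boot-device
ok setenv boot-device /pci@1f,700000/scsi@2/disk@0,0:a
ok boot

Running “ls -l /dev/dsk/c1t0d0s0” on the live system before you shut down shows the underlying device path, which helps when working out the correct OBP boot device.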

You can verify you’re booted from the ZFS BE using lustatus:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
#
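If you want a second opinion beyond lustatus, the following commands (assuming the pool and BE names used in this article) should show / mounted from the rootpool/ROOT/zfsBE dataset:

# df -k /
# zfs list -r rootpool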

At this point, if all went well, you can delete the old ufsBE. You can also re-use the old disk/slice for anything you want, such as adding it to the rootpool to create a mirror. The choice is yours, but your system is now booted from ZFS, and all of its wonderfulness is available on the root filesystem too.
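As a sketch of those two options (c1t1d0s0 is just an example second slice; it must be at least as large as the original and carry an SMI label):

# ludelete ufsBE
# zpool attach rootpool c1t0d0s0 c1t1d0s0

On SPARC, remember to install the boot block on the newly attached disk, for example:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

On x86 you would use installgrub instead.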
