Friday, July 31, 2009

ZFS Datasets and "zfs list"

Any of you who have looked at the zfs(1M) man page will have come across the term "dataset".

A dataset can be:
a) a file system
b) a volume
c) a snapshot

Every time we create a ZFS file system we are actually creating a dataset with a setting of "type=filesystem".

Every pool starts out with a single dataset with a name that is the same as the pool name.

E.g. When we create a pool called "ttt" we automatically have a dataset called "ttt" which by default has a mounted file system called /ttt.

We can change the mount point, but we cannot change the name of this dataset.
Every time we add a new dataset to a pool it must be a "child" of an existing dataset.
E.g. A new dataset called "ttt/xxx" can be created which is a "descendant" of the "ttt" dataset.

We can then create "ttt/xxx/yyy", which is a descendant of both "ttt" and "ttt/xxx".

It is not possible to create a dataset called "ttt/xxx/yyy" if "ttt/xxx" does not already exist.
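If your release supports it, "zfs create -p" will create any missing ancestors automatically. As a minimal pure-shell sketch of the parent-first rule (no ZFS required), the function below prints the ancestor datasets that must already exist, in creation order, for a hypothetical dataset name:

```shell
# Print the ancestor datasets that must exist (or be created first)
# before the given dataset can be created.
ancestors() {
    ds=$1
    prefix=""
    # Walk the slash-separated components, printing every partial
    # path except the last component (the dataset being created).
    while [ "${ds#*/}" != "$ds" ]; do
        head=${ds%%/*}
        ds=${ds#*/}
        prefix="${prefix:+$prefix/}$head"
        echo "$prefix"
    done
}

ancestors ttt/xxx/yyy
# → ttt
#   ttt/xxx
```

So before "ttt/xxx/yyy" can exist, both "ttt" and "ttt/xxx" must be in place.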

All of the datasets we have created thus far have been mounted file systems; today won't be any different.

We will look at volumes and snapshots another day.

Today we will look at the "zfs list" command. This command is used to list ZFS datasets.

First create some temp files and a new pool:
# mkfile 119M /tmp/file1
# mkfile 119M /tmp/file2
# zpool create ttt /tmp/file1 /tmp/file2

And let's create a 50MB file in our new /ttt file system.

# mkfile 50M /ttt/50_meg_of_zeros

Now run "df -h" to see that 50MB is used.

# df -h /ttt
Filesystem Size Used Available Capacity Mounted on
ttt 196M 50M 146M 26% /ttt

If you don't see 50M under the "Used" column, try running the command again.
There may be a time lapse between creating the 50_meg_of_zeros file and having "df" report 50MB of data in the file system.

Now let's use "zfs list" to get additional information about the "ttt" dataset.

# zfs list ttt
NAME   USED  AVAIL  REFER  MOUNTPOINT
ttt   50.2M   146M  50.0M  /ttt

Note that the information provided is similar to the data provided by "df -h".
The "REFER" column shows us how much data is referenced by this dataset itself.

The "USED" column refers to the amount of data used by this dataset and all the descendants of this dataset.

Since we only have a top level dataset with no descendants, the two values are roughly equal. Small discrepancies are generally due to overhead.

The "AVAIL" column shows the amount of available space in the dataset, which (not coincidentally) is equal to the free space in the pool.
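The USED/REFER relationship is easy to check from a script. "zfs list -H" prints tab-separated output with no header, which is the form intended for parsing; the sketch below feeds a captured sample line (the values from the listing above) through awk, assuming both sizes carry the same M suffix:

```shell
# Parse a captured line of "zfs list -H -o name,used,refer" output
# (tab-separated, no header). For a dataset with no descendants,
# USED and REFER differ only by overhead.
sample=$(printf 'ttt\t50.2M\t50.0M')

echo "$sample" | awk -F'\t' '{
    used  = $2 + 0    # "+ 0" drops the trailing M suffix
    refer = $3 + 0
    printf "%s overhead: %.1fM\n", $1, used - refer
}'
# → ttt overhead: 0.2M
```

Against a live pool you would replace the sample with the real command, but the parsing is the same.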

Now create some file systems… this syntax works on Solaris 10 08/07 with recent patches.

# zfs create -o mountpoint=/apps_test ttt/apps # descendant of ttt
# zfs create -o mountpoint=/work_test ttt/work # descendant of ttt
# zfs create -o mountpoint=/aaa ttt/work/aaa # descendant of both ttt and ttt/work
# zfs create -o mountpoint=/bbb ttt/work/bbb # descendant of both ttt and ttt/work

If you are using an older version of Solaris 10 you may need to use the following syntax to achieve the same thing:

# zfs create ttt/apps # descendant of ttt
# zfs set mountpoint=/apps_test ttt/apps
# zfs create ttt/work # descendant of ttt
# zfs set mountpoint=/work_test ttt/work
# zfs create ttt/work/aaa # descendant of both ttt and ttt/work
# zfs set mountpoint=/aaa ttt/work/aaa
# zfs create ttt/work/bbb
# zfs set mountpoint=/bbb ttt/work/bbb # descendant of both ttt and ttt/work

Regardless of which syntax you used to create the file systems, let's move on and create another 50MB file in one of our file systems.

# mkfile 50M /aaa/50_meg_of_zeros

# df -h | egrep 'ttt|Filesystem'
Filesystem Size Used Available Capacity Mounted on
ttt 196M 50M 96M 35% /ttt
ttt/apps 196M 24K 96M 1% /apps_test
ttt/work 196M 24K 96M 1% /work_test
ttt/work/aaa 196M 50M 96M 35% /aaa
ttt/work/bbb 196M 24K 96M 1% /bbb

Now we have two file systems ("/ttt" and "/aaa"), each showing a utilization of 50MB. Nothing surprising so far.

Note you could use "df -F zfs -h"… but that will show all zfs file systems on the system. The "egrep" syntax used above limits us to the file systems that are part of the "ttt" pool.

Now let's rerun "zfs list ttt":

# zfs list ttt
NAME  USED  AVAIL  REFER  MOUNTPOINT
ttt   100M  95.6M  50.0M  /ttt

Note that the "REFER" column is still showing 50MB because /ttt still contains only one 50MB file.

But the "USED" column now shows 100MB. Remember, the USED column represents the amount of data in the dataset and all of the descendants of the dataset.

We have 50MB in "ttt" (mounted under /ttt) and 50MB in "ttt/work/aaa" (mounted under /aaa), so the total space consumed by ttt and its descendants is 100MB.
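With no snapshots in the pool, the top-level USED should roughly equal the sum of REFER over the dataset and all of its descendants. A quick awk sanity check, using megabyte approximations of the values above (the 24.5K datasets rounded to 0.02MB):

```shell
# Sum the REFER column of a captured "zfs list -r -o name,refer ttt"
# listing (sizes pre-converted to MB for easy arithmetic).
printf '%s\n' \
    'ttt 50.0' \
    'ttt/apps 0.02' \
    'ttt/work 0.02' \
    'ttt/work/aaa 50.0' \
    'ttt/work/bbb 0.02' |
awk '{ total += $2 } END { printf "total referred: %.1fM\n", total }'
# → total referred: 100.1M
```

The total lands within rounding distance of the 100M USED reported for the pool's top-level dataset.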

We can also use "zfs list" to look at specific datasets in the pool.

# zfs list ttt/work
NAME       USED  AVAIL  REFER  MOUNTPOINT
ttt/work  50.1M  95.6M  24.5K  /work_test

Note that the "ttt/work" dataset (mounted under /work_test) contains no data so the "REFER" column shows roughly 0MB.

But the USED value of 50MB reflects the data from the descendant "ttt/work/aaa".

# zfs list ttt/work/aaa
NAME           USED  AVAIL  REFER  MOUNTPOINT
ttt/work/aaa  50.0M  95.6M  50.0M  /aaa

The "ttt/work/aaa" dataset (mounted under /aaa) contains one 50MB file but has no descendants, so both USED and REFER show 50MB.

If we want to recursively list all datasets that are part of the "ttt" pool (and exclude all other pools) we need to use the "-r" option and specify the pool name.

# zfs list -r ttt
NAME           USED  AVAIL  REFER  MOUNTPOINT
ttt            100M  95.6M  50.0M  /ttt
ttt/apps      24.5K  95.6M  24.5K  /apps_test
ttt/work      50.1M  95.6M  24.5K  /work_test
ttt/work/aaa  50.0M  95.6M  50.0M  /aaa
ttt/work/bbb  24.5K  95.6M  24.5K  /bbb

Clean up time

# zpool destroy ttt
# rmdir /apps_test
# rmdir /work_test
# rmdir /aaa
# rmdir /bbb
# rm /tmp/file*


Thursday, July 30, 2009

ZFS Tip: "zpool list", "zpool status", "zpool iostat" & "zpool history"

For those who asked, I will convert these tips to html and post on termite. If I can get this done today I will provide a URL tomorrow.
For those who did not try last week's exercises, I am afraid you will not be eligible for certificates, plaques, trophies or awards.
But the good news is that it is not too late to catch up. If you cut and paste, each exercise should take roughly two minutes.
Today we will look at some "status" or "informational" commands that will give us more information about our ZFS pools. First we need a pool and some file systems to work with. As per usual, we can build the pool on top of files instead of real disk.

Try this on a server near you:

# mkfile 119M /tmp/file1
# mkfile 119M /tmp/file2
# zpool create ttt /tmp/file1 /tmp/file2
# zfs create -o mountpoint=/apps_test ttt/apps
# zfs create -o mountpoint=/work_test ttt/work

And let's put some data in one of our file systems.

# mkfile 50M /apps_test/50_meg_of_zeros

First list all the ZFS pools on the system:

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tools  4.97G   121M  4.85G   2%  ONLINE  -
ttt     228M  50.2M   178M  22%  ONLINE  -

Notice that my system has two pools. The "tools" pool is created by jumpstart.

If we only want information for the "ttt" pool we can type:

# zpool list ttt
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
ttt    228M  50.2M   178M  22%  ONLINE  -

The next command will list all the vdevs in the pool; our pool currently has two vdevs (each vdev consists of a single 119MB file).

# zpool status ttt
  pool: ttt
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE   READ WRITE CKSUM
        ttt           ONLINE     0     0     0
          /tmp/file1  ONLINE     0     0     0
          /tmp/file2  ONLINE     0     0     0

errors: No known data errors

If you have a "tools" pool on your system you can run "zpool status tools" and see how a mirrored vdev is displayed. I promise I will dig into mirroring soon… but not today.

If we want to see how much data is in each vdev we can use another command:

# zpool iostat -v ttt
capacity operations bandwidth
pool used avail read write read write
------------ ----- ----- ----- ----- ----- -----
ttt 50.2M 178M 0 1 15 42.1K
/tmp/file1 24.1M 89.9M 0 0 6 20.2K
/tmp/file2 26.1M 87.9M 0 0 8 21.8K
------------ ----- ----- ----- ----- ----- -----

Notice that our 50MB file has been spread evenly over the two vdevs.
We can also add a time duration to repeatedly display statistics (similar to iostat(1M)).

# zpool iostat -v ttt 5 # this will display statistics every 5 seconds.

We can use "zpool iostat" to see how new writes are balanced over all vdevs in the pool.
Let's first add a third vdev to the pool.

# mkfile 119M /tmp/file3
# zpool add ttt /tmp/file3
# zpool iostat -v ttt
capacity operations bandwidth
pool used avail read write read write
------------ ----- ----- ----- ----- ----- -----
ttt 50.3M 292M 0 0 1 3.26K
/tmp/file1 24.1M 89.9M 0 0 0 1.51K
/tmp/file2 26.1M 87.9M 0 0 0 1.62K
/tmp/file3 8K 114M 0 18 0 80.9K
------------ ----- ----- ----- ----- ----- -----

Now we have an empty vdev. Notice that existing data has not been redistributed.
But if we start writing new data, the new data will be distributed over all vdevs (unless one or more vdevs is full).

# mkfile 50M /apps_test/50_meg_of_zeros_2
# zpool iostat -v ttt
capacity operations bandwidth
pool used avail read write read write
------------ ----- ----- ----- ----- ----- -----
ttt 100M 242M 0 0 1 6.06K
/tmp/file1 39.5M 74.5M 0 0 0 2.37K
/tmp/file2 41.6M 72.4M 0 0 0 2.48K
/tmp/file3 19.2M 94.8M 0 6 0 183K
------------ ----- ----- ----- ----- ----- -----
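The uneven fill is easy to quantify. A small awk pass over the capacity columns of the sample output above (used/avail pre-converted to MB) shows the new vdev still lagging the older two:

```shell
# Compute per-vdev fill percentage from captured "zpool iostat -v"
# capacity columns: name, used (MB), avail (MB).
printf '%s\n' \
    '/tmp/file1 39.5 74.5' \
    '/tmp/file2 41.6 72.4' \
    '/tmp/file3 19.2 94.8' |
awk '{ printf "%s %.0f%%\n", $1, 100 * $2 / ($2 + $3) }'
# → /tmp/file1 35%
#   /tmp/file2 36%
#   /tmp/file3 17%
```

Keep writing and the percentages converge, since new writes favour the emptier vdev.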

Let's close off with a self-explanatory command:

# zpool history ttt

History for 'ttt':
2008-02-12.11:27:02 zpool create ttt /tmp/file1 /tmp/file2
2008-02-12.11:29:29 zfs create -o mountpoint=/apps_test ttt/apps
2008-02-12.11:29:32 zfs create -o mountpoint=/work_test ttt/work
2008-02-12.16:31:00 zpool add ttt /tmp/file3

Now if you have a "tools" pool on your system, and you want to see how Jumpstart set it up, try running "zpool history tools".

Clean up time already:

# zpool destroy ttt
# rmdir /apps_test
# rmdir /work_test
# rm /tmp/file*


Wednesday, July 29, 2009

E10K: Powering on/off procedures

Powering off individual domains

1. Connect to the correct domain
1. Login to ssp as ssp and enter ${domain_name} at the Please enter SUNW_HOSTNAME: prompt.
2. If already logged into the ssp, enter domain_switch ${domain_name} in a command window.
2. ID proper domain boards by executing domain_status and noting the board numbers under the heading SYSBDS.

${SSP}:${Domain}% domain_status

domain1 Ultra-Enterprise-10000 Plat_name 2.6 4 5
domain2 Ultra-Enterprise-10000 Plat_name 2.6 0 1
domain3 Ultra-Enterprise-10000 Plat_name 2.6 3
domain4 Ultra-Enterprise-10000 Plat_name 2.6 6 8
domain5 Ultra-Enterprise-10000 Plat_name 2.6 9 10

3. Bring the domain down.

1. Open another command window, issue domain_switch if necessary.
2. Execute netcon to start up the domain console.
3. Log in as root.
4. Execute sync;sync;sync;init 0
5. Once the system is at the OK prompt, continue.

4. In the first command window, enter power -off -sb ${brd_numbers[*]}. Board numbers are listed together with space separators.

Powering off the entire E10K

1. Bring down all E10K domains:
1. Open a command window.
2. Issue domain_switch ${domain_name} to connect to the correct domain.
3. Issue netcon to open the domain console.
4. Log in as root.
5. Execute sync;sync;sync;init 0
6. Once the domain is at the OK prompt, exit the netconsole by issuing ~. (Press/hold tilde while pressing period).
7. Iterate through the above until all domains are down.
2. Open a command window on shamash and enter power -B -off -all. When the command completes, you will hear the power switches changing position in the E10K cabinet.

3. Power off the SSP:
1. su - root
2. sync;sync;sync;init 0
3. When the OK prompt appears, turn off the power to the ssp.

Powering on individual domains

1. Open two command windows; issue domain_switch ${domain_name} as necessary.
2. In one of the windows, ID the system boards by issuing domain_status and noting the board numbers under the heading SYSBDS.

${SSP}:${Domain}% domain_status

domain1 Ultra-Enterprise-10000 Plat_name 2.6 4 5
domain2 Ultra-Enterprise-10000 Plat_name 2.6 0 1
domain3 Ultra-Enterprise-10000 Plat_name 2.6 3
domain4 Ultra-Enterprise-10000 Plat_name 2.6 6 8
domain5 Ultra-Enterprise-10000 Plat_name 2.6 9 10

3. Issue power -on -sb ${board_numbers[*]}. Board numbers are listed together with space separators.
4. Issue bringup -A off -l7. NOTE: Space between the '-A' and 'off' and lower case L in the '-l7'.
5. In the other window, issue netcon. Wait for the OK prompt to appear, then execute boot.
6. Wait for the system to come up completely, then exit the netconsole by issuing ~. (Press/hold tilde while pressing period).

Powering on the entire E10K

1. Power on the SSP; at the OK prompt, type boot
2. Flip the power switches on the E10K.
3. Log in as ssp
1. Enter ${Plat_name} at the Please enter SUNW_HOSTNAME: prompt.
2. Open a command window; execute power -on -all
3. Open another command window. For each domain:
1. In one window, execute domain_switch ${domain_name}
2. Execute bringup -A off -l7 NOTE: Space between the '-A' and 'off' and lower case L in the '-l7'.
3. In the other command window, execute domain_switch ${domain_name} followed by netcon
4. When the OK prompt appears, execute boot


NOTE: ONLY BRING UP ONE SYSTEM AT A TIME. Otherwise, the boot process will take longer than it already does!
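The per-domain loop in step 3 can be sketched as a dry-run shell function that prints the SSP commands in order rather than executing them (the domain names here are placeholders):

```shell
# Dry-run sketch of the per-domain power-on loop: print the SSP
# commands that would be issued for each domain, one domain at a
# time, without executing anything.
bringup_domains() {
    for d in "$@"; do
        echo "domain_switch $d"
        echo "bringup -A off -l7"
        echo "netcon   # wait for OK prompt, then: boot"
    done
}

bringup_domains domain1 domain2
```

Printing the commands first is a cheap way to review the sequence before touching a real SSP.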



Tuesday, July 28, 2009

Moving a pool to a different server

Today we are going to move a ZFS pool from one server to another. There are several ways we could execute this exercise:

a) we could create a pool on SCSI or SAS drives and physically move the drives from one server to another
b) we could create a pool on SAN disk and then ask the Storage team to rezone the disks to another server.
c) we could create a pool on a bunch of memory sticks and move the memory sticks.
d) we could create a pool on 64MB files and ftp the files from one server to the other.

Let's use option "d" (64MB files) because we don't need special hardware.

I encourage you to give this a try; you need a pair of servers, both set up with the same release of Solaris 10.

The servers I used are called gnat and epoxy. One is sparc, the other is x86.

If you prefer to use two sparc boxes or two x86 boxes that is fine too.

gnat is a T2000 loaded with Solaris 10 08/07-sparc
epoxy is an X4200 loaded with Solaris 10 07/07-x86

First create a pool with two vdevs… each vdev will consist of a single 64MB file (no mirroring or raidz today)

gnat# mkfile 64M /tmp/file1
gnat# mkfile 64M /tmp/file2
gnat# zpool create ttt /tmp/file1 /tmp/file2

Now add a couple of file systems mounted under "/apps_test", and "/work_test".

Earlier this week we used one command to create file systems and a second command to rename the mount points.

Today we are combining the two steps into a single command to save typing.

gnat# zfs create -o mountpoint=/apps_test ttt/apps
gnat# zfs create -o mountpoint=/work_test ttt/work

gnat# df -h | egrep 'ttt|Filesystem'
Filesystem Size Used Available Capacity Mounted on
ttt 87M 24K 87M 1% /ttt
ttt/apps 87M 24K 87M 1% /apps_test
ttt/work 87M 24K 87M 1% /work_test

Write some data to the file systems.

gnat# echo "hello world #1" > /apps_test/testfile_in_apps
gnat# echo "hello world #2" > /work_test/testfile_in_work
gnat# ls -lR /*test*
/apps_test:
total 2
-rw-r--r-- 1 root root 15 Feb 8 15:21 testfile_in_apps

/work_test:
total 2
-rw-r--r-- 1 root root 15 Feb 8 15:22 testfile_in_work

Now export the pool.

This is similar to exporting a Veritas disk group… but we don't need to bother unmounting the file systems first.

The pool will be left in a state where it can be moved to another system.

gnat# zpool export ttt

If we were using real disk, we would now physically or logically move the disk to the other server.

But since we are using 64MB files, we can simply copy them to the /tmp directory on another server.

If you use ftp make sure to do a "binary" transfer! I used scp.

gnat# scp /tmp/file1 myusername@e_p_o_x_y:/tmp
gnat# scp /tmp/file2 myusername@e_p_o_x_y:/tmp

Log into the second server and check that the files are intact:

epoxy# ls -l /tmp/file*
-rw------- 1 waltonch unixadm 67108864 Feb 8 15:31 /tmp/file1
-rw------- 1 waltonch unixadm 67108864 Feb 8 15:35 /tmp/file2

Now import the pool.

If we were using real disks, we could ask the zpool command to examine all new disks searching for "importable" pools.

But since we are using files, we need to tell the zpool command where to look. If you have a million files in /tmp this may take a while. If /tmp is relatively empty it should be quick.

epoxy# zpool import -d /tmp ttt

Now check it out:

epoxy# df -h | egrep 'ttt|Filesystem'
Filesystem Size Used Available Capacity Mounted on
ttt/apps 87M 26K 87M 1% /apps_test
ttt 87M 24K 87M 1% /ttt
ttt/work 87M 26K 87M 1% /work_test

epoxy# ls -lR /*test*
/apps_test:
total 2
-rw-r--r-- 1 root root 15 Feb 8 15:21 testfile_in_apps

/work_test:
total 2
-rw-r--r-- 1 root root 15 Feb 8 15:22 testfile_in_work

Many of us have exported Veritas disk groups from one machine and imported them into another.

But with Veritas we had to create the mount points, edit the vfstab file, and manually mount the file systems.

ZFS did it all! And notice that ZFS did not complain going from sparc to x86. Pretty cool folks; pretty cool.

Now we must cleanup on both servers:
epoxy# zpool destroy ttt
epoxy# rmdir /apps_test
epoxy# rmdir /work_test
epoxy# rm /tmp/file*

gnat# rmdir /apps_test
gnat# rmdir /work_test
gnat# rm /tmp/file*


Monday, July 27, 2009

ZFS Tip: Multiple vdevs in a pool

Today we will look at spanning a pool over multiple disks (or for demo purposes: multiple 64MB files).

The basic building block of a ZFS pool is called a "vdev" (a.k.a. "virtual device")
A vdev can be one of:

• a single "block device" or a "regular file" (this is what we have used so far)
• a set of mirrored "block devices" and/or "regular files"
• a "raidz" group of "block devices" and/or "regular files" (raidz is an improved version of raid5)

A pool can contain multiple vdevs.

• The total size of the pool will be equal to the sum of all vdevs minus overhead.
• Vdevs do not need to be the same size.

Let's jump to it and create a pool with two vdevs… where each vdev is a simple 64MB file. In this case our pool size will be 128MB minus overhead. We will leave mirroring and raidz for another day.

Please try this on an unused Solaris 10 box:

Create two 64MB temp files (if you don't have space in /tmp you can place the files elsewhere… or even use real disk partitions)

# mkfile 64M /tmp/file1
# mkfile 64M /tmp/file2

Create a ZFS pool called "ttt" with two vdevs. The only difference from yesterday's syntax is that we are specifying two 64MB files instead of one.

# zpool create ttt /tmp/file1 /tmp/file2

And create an extra file system called ttt/qqq using the default mount point of /ttt/qqq.

# zfs create ttt/qqq
# df -h | egrep 'ttt|Filesystem' # sorry for the inconsistency: yesterday I used "df -k"; today I switched to "df -h"

Filesystem Size Used Available Capacity Mounted on
ttt 87M 25K 87M 1% /ttt
ttt/qqq 87M 24K 87M 1% /ttt/qqq

We now have 87MB of usable space; this is a bit more than double what we had with only one vdev so it seems the ratio of overhead to usable space improves as we add vdevs.
But again, overhead is generally high because we are dealing with tiny (64MB) vdevs.
Okay… let's fill up /ttt/qqq with a bunch of zeros. This will take a minute or two to run and will generate an error.

# dd if=/dev/zero of=/ttt/qqq/large_file_full_of_zeros
write: No space left on device
177154+0 records in
177154+0 records out

We are not using quotas, so ttt/qqq was free to consume all available space, i.e. both /ttt and /ttt/qqq are now full file systems even though /ttt is virtually empty.

# df -h | egrep 'ttt|Filesystem'

Filesystem Size Used Available Capacity Mounted on
ttt 87M 25K 0K 100% /ttt
ttt/qqq 87M 87M 0K 100% /ttt/qqq

Now let's create a third file to use as another vdev. Note that it doesn't need to match the size of the existing vdevs.

# mkfile 109M /tmp/file3

Let's add it to the pool

# zpool add ttt /tmp/file3

If we had been using Veritas or SVM we would have had a three step process: adding disk, resizing volumes, and growing the file systems.

With ZFS, as soon as disk space is added to the pool, the space becomes available to all the file systems in the pool.

So after adding a 109MB vdev to our pool, both /ttt and /ttt/qqq instantly show 104MB of available space. Very cool.

# df -h | egrep 'ttt|Filesystem'

Filesystem Size Used Available Capacity Mounted on
ttt 191M 25K 104M 1% /ttt
ttt/qqq 191M 87M 104M 46% /ttt/qqq

Notice that when talking about pools and vdevs today, I did not mention the words "striping" (raid-0) or "concatenation"… terms that we are used to seeing in the SVM and Veritas worlds.

ZFS pools don't use structured stripes or concatenations. Instead, a pool dynamically attempts to balance the data over all of its vdevs.

If we started modifying data in our ttt pool, the pool would eventually balance itself out so that the data is spread evenly over the entire pool.

i.e. No hot spots!

Time for cleanup.

# zpool destroy ttt
# rm /tmp/file[1-3]

Since we used the default mount points today, the directories "/ttt" and "/ttt/qqq" have been removed for us, so there is no more cleanup to do.


Sunday, July 26, 2009

Creating multiple ZFS file systems in a single pool

All of us have experienced the following scenario.
Developers ask for two file systems with specific sizes:
/aaa 1GB
/bbb 5GB
Let’s assume that we only have 6GB available and we create file systems as requested.
A few days later /aaa is full and /bbb contains almost nothing. The developers ask for more space in /aaa.
Do you purchase new disk?
Do you backup, resize, and restore?

Or if you are running VXFS/VXVM do you start running convoluted commands to resize the file systems?

Let's look at what the situation would be like if we had used ZFS.

A ZFS pool is capable of housing multiple file systems… all file systems share the same underlying disk space.

• No rigid boundaries are created between file systems; the data from each file system is evenly distributed throughout the pool.
• By default, any file system is allowed to use any (or all) of the free space in the pool.
• If data is deleted from a file system, the space is returned to the pool as free space.

So in the example above, if we had created a 6GB pool housing both /aaa and /bbb, either file system could potentially grow to almost 6GB.

We would not get a report of a full file system until the entire pool is full. The pool won't fill up until the total data written to both file systems is roughly equal to the size of the pool.

Thus there would be nothing stopping the developers from placing 4GB in /aaa and 1GB in /bbb… this would leave approximately 1GB of space free for either file system to consume.
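The arithmetic is worth spelling out: free space is simply pool size minus the total data written, regardless of which file system wrote it. A tiny shell sketch of the 6GB example (sizes in MB, overhead ignored):

```shell
# Shared-pool accounting: any file system may consume the pool's
# free space, so free space = pool size - total data written.
pool=6144   # 6GB pool, in MB
aaa=4096    # 4GB written to /aaa
bbb=1024    # 1GB written to /bbb
free=$((pool - aaa - bbb))
echo "free for either file system: ${free}M"
# → free for either file system: 1024M
```

Neither file system "owns" that last 1GB; whichever one writes first gets it.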

The behaviour can be adjusted with "reservations" and "quotas"… but let's leave that for another day.

So let's see how to create a ZFS pool with multiple file systems. Normally we would create the pool on one or more real disks, but for test purposes we can use a 64MB file. Try this on an unused server:

Create a 64MB temp file
# mkfile 64M /tmp/file1

Create a ZFS pool called "ttt" on top of the temp file.
# zpool create ttt /tmp/file1

Run df to make sure /ttt exists

# df -k | egrep 'ttt|Filesystem'
Filesystem 1024-blocks Used Available Capacity Mounted on
ttt 28160 24 28075 1% /ttt

Now create two new file systems within pool ttt

# zfs create ttt/xxx
# zfs create ttt/yyy

Now view all three file systems:

# df -k | egrep 'ttt|Filesystem'

Filesystem 1024-blocks Used Available Capacity Mounted on
ttt 28160 27 27971 1% /ttt
ttt/xxx 28160 24 27971 1% /ttt/xxx
ttt/yyy 28160 24 27971 1% /ttt/yyy

Note that ZFS file systems within a pool must be created in a hierarchical structure. The ZFS pool (in this case "ttt") is always the root of the hierarchy.

The mount points by default will share the same name as the file systems (prefixed with a "/").

But nobody wants to use /ttt/xxx or /ttt/yyy as mount points, so let's change them.

# zfs set mountpoint=/aaa ttt/xxx
# zfs set mountpoint=/hello/world ttt/yyy
# df -k | egrep 'ttt|Filesystem'

Filesystem 1024-blocks Used Available Capacity Mounted on
ttt 28160 24 27962 1% /ttt
ttt/xxx 28160 24 27962 1% /aaa
ttt/yyy 28160 24 27962 1% /hello/world

Note that we did not have to create mount points or set anything up in /etc/vfstab. ZFS takes care of everything for us and life is great.

And to clean up… the commands are the same as before… but you may have to manually remove some of the mount points.

# zpool destroy ttt
# rm /tmp/file1
# rmdir /aaa
# rmdir /hello/world
# rmdir /hello


Saturday, July 25, 2009

Creating a simple ZFS pool and ZFS file system

ZFS was introduced in Solaris 10 06/06 (update 2). No special license or software is required to use it. ZFS provides both "volume management" and a "file system", so it can be used in place of VXVM/VXFS and SVM/UFS.

ZFS file systems are contained inside ZFS pools.
So… if you want to create a ZFS file system you first need to create a ZFS pool.

A ZFS pool is usually built on top of one or more block devices (disk slices, memory sticks, etc.), but for test or demonstration purposes a ZFS pool can be created on top of one or more regular files.

The minimum size of a block device or file is 64MB.

I encourage you to try the following three commands in the global zone of an unused Solaris 10 server… or if you are running Solaris 10 on a desktop or laptop you can try it there.

Create a 64MB temp file
# mkfile 64M /tmp/file1

Create a ZFS pool called "ttt" on top of the temp file.
# zpool create ttt /tmp/file1

Every new pool by default contains a mounted file system; the mount point is the same as the pool name. To see this, run:

# df -k /ttt

Notice a few things about this brand new file system:

a) mkfs is not needed for ZFS file systems.
b) mount points do not need to be pre-created with "mkdir".
c) ZFS file systems do not require entries in /etc/vfstab.
d) available space is smaller than the underlying storage… this is due to overhead. Overhead for very small file systems is high.
e) the device name reported by "df -k" is simply the pool name; it does not start with "/dev".
f) ZFS file systems start out completely empty; they do not contain "lost+found" directories.

Now just in case you created your ZFS pool and ZFS file system on a production server (which I won't endorse), here is how you clean up your tracks:

# zpool destroy ttt
# rm /tmp/file1


Friday, July 24, 2009

ZFS boot/root - bring on the clones

Today's ZFS tip is dedicated to anybody that has experienced corruption as a result of loading Solaris 10 patches.

Using ZFS cloning, it is possible to create bootable clones of / and /var.

A clone takes a few seconds to create, but could save hours or days if a patch installation does not go as planned.

Patching can be done on the clone or the original.
If the clone is corrupted, the rollback path is to simply boot the original.
If the original is corrupted, the rollback path is to simply boot the clone.

Let's have a look at where / and /var file systems are normally mounted:

# df -k / /var
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool/ROOT/blue 32772096 1902300 12809372 13% /
rpool/ROOT/blue/var 32772096 354808 12809372 3% /var

The two datasets which house the / and /var file systems form an entity that is referred to as a "boot environment" (a.k.a. BE).

Note: other miscellaneous file systems in the root pool are not considered to be part of the boot environment.

e.g. /home is not part of the boot environment.

Using the power of ZFS copy-on-write cloning, we can clone the boot environment in a matter of seconds.

A cloned boot environment appears as a complete bootable and modifiable copy of our original operating system.

Sun's ZFS documentation assumes everybody will want to use "live upgrade" to clone the boot environment.

The advantage of live upgrade is that the clone can be done with two commands.

The disadvantage of live upgrade is that it may be slightly buggy.

I have elected to provide you with a procedure that does not use live upgrade.

Clones are built from snapshots. Snapshots require an arbitrary "snapname"; we will use today's date for the snapname.

# SNAPNAME=`date +%Y%m%d`

Create snapshots of / and /var. Both snapshots can be created atomically with a single command. The process should take about half a second.

# zfs snapshot -r rpool/ROOT/blue@$SNAPNAME

Optionally view the snapshots.

# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/blue@20090723          0      -  1.81G  -
rpool/ROOT/blue/var@20090723      0      -   346M  -

Now let's create a new boot environment named "red" by creating clones from the snapshots.

We need to clone each of the two datasets separately; plan for about half a second per dataset.

# zfs clone rpool/ROOT/blue@$SNAPNAME rpool/ROOT/red
# zfs clone rpool/ROOT/blue/var@$SNAPNAME rpool/ROOT/red/var

By default, the mountpoints for both clones will be set to "legacy". We need to change the mount points to "/" and "/var" but we also want to disable automatic mounting so we don't end up with multiple datasets trying to use the same mount points… remember we already have existing datasets mounted under "/" and "/var".

# zfs set canmount=noauto rpool/ROOT/red
# zfs set canmount=noauto rpool/ROOT/red/var
# zfs set mountpoint=/ rpool/ROOT/red
# zfs set mountpoint=/var rpool/ROOT/red/var
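The snapshot, clone, and property steps above can be collected into a dry-run helper that only prints the commands it would run. Dataset names follow the rpool/ROOT layout used here; nothing below touches a real pool:

```shell
# Dry-run sketch: print the zfs commands for cloning boot
# environment $1 to a new boot environment $2 under rpool/ROOT.
clone_be() {
    src=$1 dst=$2
    snap=$(date +%Y%m%d)
    echo "zfs snapshot -r rpool/ROOT/$src@$snap"
    # Clone the root and /var datasets and keep them from
    # auto-mounting over the live boot environment.
    for fs in "" /var; do
        echo "zfs clone rpool/ROOT/$src$fs@$snap rpool/ROOT/$dst$fs"
        echo "zfs set canmount=noauto rpool/ROOT/$dst$fs"
    done
    echo "zfs set mountpoint=/ rpool/ROOT/$dst"
    echo "zfs set mountpoint=/var rpool/ROOT/$dst/var"
}

clone_be blue red
```

Reviewing the printed commands before running them for real is cheap insurance when the datasets involved are your root file systems.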

If you are working on a sparc system, add two lines to the menu.lst file. If you wish you can use vi instead of echo.

# echo "title red" >> /rpool/boot/menu.lst
# echo "bootfs rpool/ROOT/red" >> /rpool/boot/menu.lst
# more /rpool/boot/menu.lst
title blue
bootfs rpool/ROOT/blue
title red
bootfs rpool/ROOT/red

The menu.lst file can be used at boot time to get a list of available boot environments.

Optionally view the list of file systems in rpool.

# zfs list -t filesystem -r -o name,type,mountpoint,mounted,canmount,origin rpool
NAME                 TYPE        MOUNTPOINT    MOUNTED  CANMOUNT  ORIGIN
rpool                filesystem  /rpool        yes      on        -
rpool/ROOT           filesystem  legacy        no       on        -
rpool/ROOT/blue      filesystem  /             yes      noauto    -
rpool/ROOT/blue/var  filesystem  /var          yes      noauto    -
rpool/ROOT/red       filesystem  /             no       noauto    rpool/ROOT/blue@20090723
rpool/ROOT/red/var   filesystem  /var          no       noauto    rpool/ROOT/blue/var@20090723
rpool/home           filesystem  /home         yes      on        -
rpool/marimba        filesystem  /opt/Marimba  yes      on        -
rpool/openv          filesystem  /usr/openv    yes      on        -

For the most part, the clones appear as standard datasets, but the "origin" property shows us that they are cloned from a pair of snapshots.

Notice that we did not clone the datasets associated with /home, /opt/Marimba, /rpool, /rpool/ROOT, or /usr/openv.

These file systems are not part of a boot environment; but since the "canmount" property for each of these datasets is set to "on", these file systems will automatically be mounted regardless of which boot environment we boot from.

Our cloned boot environment is now in place and is fully bootable.
The system is still configured to mount / and /var from the original boot environment (blue) on reboot.

The default boot environment can be changed with the following command:

# zpool set bootfs=rpool/ROOT/red rpool

On the next reboot the server will mount / and /var from the cloned boot environment (red).

The default can easily be changed back to blue if required:

# zpool set bootfs=rpool/ROOT/blue rpool

At this point, it is possible to boot either environment.
Any changes to any files in / or /var in either environment will not be reflected in the other environment.
Any changes to any files in /home, /rpool, etc. will show up in both environments because there is only one copy of these file systems.

That’s all for now.


Thursday, July 23, 2009

Erin Andrews secretly videotaped nude in hotel

ESPN reporter Erin Andrews was secretly videotaped in the nude while she was alone in a hotel room and the video was posted on the Internet, her lawyer and the network said.

The blurry five-minute video shows a nude blonde woman standing in front of a hotel room mirror. It’s unknown when or where it was shot.

Andrews’ lawyer, Marshall Grossman, says the 31-year-old reporter plans to seek criminal charges and file civil lawsuits against the unknown cameraman and anyone who publishes the material.

"While alone in the privacy of her hotel room, Erin Andrews was surreptitiously videotaped without her knowledge or consent," Grossman said in the statement. "She was the victim of a crime and is taking action to protect herself and help ensure that others are not similarly violated in the future."

A woman answering the phone Tuesday at Grossman’s office said he would have no further comment.

Andrews has covered hockey, college football, college basketball and Major League Baseball for the network since 2004, often as a sideline reporter during games.

A former dance team member at the University of Florida, Andrews was something of an Internet sensation even before the video’s circulation. She has been referred to as "Erin Pageviews" because of the traffic that video clips and photos of her generate, and Playboy magazine named her "sexiest sportscaster" in both 2008 and 2009.

She last appeared on the network as part of its ESPY Awards broadcast on Sunday, and is scheduled to be off until September, when she will be covering college football, ESPN spokesman Josh Krulewitz said.

"Erin has been grievously wronged here," Krulewitz said. "Our people and resources are in full support of her as she deals with this abhorrent act."

It was not clear when the video first appeared on the Internet. Most of the links to it had been removed by Tuesday.

Ephraim Cohen, a spokesman for the video portal Dailymotion, could not confirm the video had actually appeared on his company’s site, but said it may have been there months ago. He said a search for the name of the user who purportedly uploaded the video showed the person had opened an account in February, but had since closed it.

"As far as we can tell, the user took the account and the video down a while ago," he said.

Illegal videos often are posted to multiple sites such as YouTube and Dailymotion, which remove them as soon as they are found. The videos also often circulate on peer-to-peer or file-sharing sites, much like illegal music downloads.


ZFS boot/root - mirroring

Today we will see how mirroring can be used to migrate an existing ZFS root pool to a new pair of disks.
This technique may be used to upgrade to larger drives.

Let's assume we have a ZFS root pool mirrored between c0t0d0 and c0t1d0.
We will go through the simple procedure of migrating to a pair of new disks (c0t2d0 and c0t3d0).

First, verify the status of the pool.
# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
rpool ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
errors: No known data errors

Run format and make sure the new disks have valid SMI labels.

If there is an EFI label, it will need to be replaced with a standard SMI label; this can be done by running "format -e" and relabeling the disk.

If the new and old disks are the same size, fmthard can be used to transfer the partition table.
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2
fmthard: New volume table of contents now in place.
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t3d0s2
fmthard: New volume table of contents now in place.

If the new disks are not the same size as the old disks, the new disks must be manually partitioned using the format command.
Slice 0 on the new disks needs to be at least as big as slice 0 on the old disks.

You may wish to view the partition table on one (or both) of the new drives.
# echo verify | format c0t3d0
selecting c0t3d0
. . .
Volume name = < >
ascii name = <SUN72G cyl 14087 alt 2 hd 24 sec 424>
pcyl = 14089
ncyl = 14087
acyl = 2
nhead = 24
nsect = 424
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 14083 68.34GB (14084/0/0) 143318784
1 unassigned wu 0 0 (0/0/0) 0
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wu 0 0 (0/0/0) 0
4 unassigned wu 0 0 (0/0/0) 0
5 unassigned wu 0 0 (0/0/0) 0
6 unassigned wu 0 0 (0/0/0) 0
7 unassigned wu 0 0 (0/0/0) 0

Add the new disks to the root pool. When specifying the source you can specify either c0t0d0s0 or c0t1d0s0.
# zpool attach rpool c0t0d0s0 c0t2d0s0
# zpool attach rpool c0t0d0s0 c0t3d0s0

Verify that the disks have been added.
# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 11.21% done, 0h0m to go
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
c0t2d0s0 ONLINE 0 0 0
c0t3d0s0 ONLINE 0 0 0
errors: No known data errors
We now have a four way mirror!

After a few minutes, "zpool status" should show that mirroring (resilvering) has completed.
# zpool status rpool
pool: rpool
state: ONLINE
scrub: resilver completed after 0h1m with 0 errors on Fri Jul 17 15:56:38 2009
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
c0t2d0s0 ONLINE 0 0 0
c0t3d0s0 ONLINE 0 0 0
errors: No known data errors

It is likely that future versions of ZFS will take care of installing the boot block (as is the case with SVM), but for now it must be done manually.
For sparc systems run:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t3d0s0
For x86 systems run:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t3d0s0
If you forget to run "installboot" or "installgrub", your server will not boot from the new disks!

Configure the server to use one of the new disks as the boot disk.
For sparc systems this can be done using luxadm(1M). For x86 systems it will likely require a change in the BIOS.
# luxadm set_boot_dev -y /dev/dsk/c0t2d0s0

On a sparc system if you want primary and secondary boot devices you can try this:
# luxadm set_boot_dev -y /dev/dsk/c0t2d0s0
# BOOTDISK1=`eeprom boot-device | sed s/^.*=//`
# luxadm set_boot_dev -y /dev/dsk/c0t3d0s0
# BOOTDISK2=`eeprom boot-device | sed s/^.*=//`
# eeprom boot-device="$BOOTDISK1 $BOOTDISK2"
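The sed expression here simply strips everything up to and including the '=' from eeprom's output. As a quick sanity check of the pattern (the device path below is an invented example, not a real OBP path):

```shell
# Strip everything up to and including '=' from an eeprom-style line.
# The device path is a made-up example for illustration only.
LINE='boot-device=/pci@1f,700000/scsi@2/disk@0,0:a'
BOOTDEV=`echo "$LINE" | sed 's/^.*=//'`
echo "$BOOTDEV"
```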

Rebooting is not mandatory!
But… it is a good idea to try booting off of the new disks to make sure everything is working.
# reboot

Detach the old disks.
# zpool detach rpool c0t0d0s0
# zpool detach rpool c0t1d0s0

# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c0t2d0s0 ONLINE 0 0 0
c0t3d0s0 ONLINE 0 0 0
errors: No known data errors
Notice the old disks are gone from the pool.

If the new disks had been larger than the old disks, the pool would have instantly grown after detaching the last of the old disks.

Please note that there is currently no way to salvage any data from a device that has been detached from a ZFS pool.

This means that "splitting mirrors" does not provide a rollback path when patching. But, ZFS does give us clones and snapshots which I will discuss in a future ZFS tip.


Tuesday, July 21, 2009

ZFS boot/root - intro

I think on the whole, you will find that ZFS booting is very simple. It should make your lives easier.

Also, I believe that our goal of implementing SAN booting as a standard will be much easier to achieve if we first adopt ZFS root file systems as a prerequisite standard.

Here are the basic rules, requirements, and recommendations around ZFS booting:
• ZFS boot is supported with Solaris 10 10/08 (update 6) and later versions
• No OBP upgrades are required
• Disks must be partitioned with SMI labels (this is the same for both UFS and ZFS booting).
• A ZFS root pool must contain only one VDEV. The VDEV can be mirrored (with two or more disks) but Raid-Z is not supported.
• It is recommended that ZFS root pools reside on slice 0 of the boot disk(s).
• A ZFS root pool must be large enough to accommodate the OS, as well as independent datasets for SWAP, DUMP, and /home
• A ZFS root pool should normally be named "rpool" but this is not an absolute requirement.

Here is an example of a 72GB root pool built on a pair of 72GB disks.
In this example slice 0 on both disks is sized to make use of all available space on the disks.

----------------------------------------------------------------------
| 72GB root pool named "rpool" - contains /, /var, /home, swap & dump |
|                                                                     |
|  -----------------------------------------------------------------  |
|  | 72GB mirrored VDEV                                            |  |
|  |                                                               |  |
|  |  ----------------------------    ----------------------------  |  |
|  |  | 72GB "Slice 0" on disk#1 |    | 72GB "Slice 0" on disk#2 |  |  |
|  |  |        1st mirror        |    |        2nd mirror        |  |  |
|  |  ----------------------------    ----------------------------  |  |
|  -----------------------------------------------------------------  |
----------------------------------------------------------------------

I would encourage you to try jumpstarting a system with ZFS boot enabled, and then run a few commands to see how things are set up:

ok> boot net - install
……….. wait while the OS installs ……...

# df -k | grep rpool
rpool/ROOT/blue 51351552 1898362 39609380 5% /
rpool/ROOT/blue/var 51351552 29578 39609380 1% /var
rpool/home 1048576 437 1048138 1% /home
rpool/marimba 524288 135049 389239 26% /opt/Marimba
rpool/bmc 2097152 235035 1862117 12% /opt/bmc
rpool 51351552 93 39609380 1% /rpool
rpool/openv 524288 5551 518737 2% /usr/openv

# zpool status
pool: rpool
state: ONLINE
scrub: none requested
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
errors: No known data errors

# zfs list -o name,quota,volsize,mountpoint
rpool none - /rpool
rpool/ROOT none - legacy
rpool/ROOT/blue none - /
rpool/ROOT/blue/var none - /var
rpool/bmc 2G - /opt/bmc
rpool/dump - 1G -
rpool/home 1G - /home
rpool/marimba 512M - /opt/Marimba
rpool/openv 512M - /usr/openv
rpool/swap - 8G -
Notice that the pool named "tools" is no longer created; everything that was in the tools pool is now placed in "rpool".

# dumpadm
Dump content: kernel pages
Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/gnat
Savecore enabled: yes

# swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 33030128 33030128

The size of the swap device is based on the formula: 2 * memory, with upper and lower bounds that are based on the size of the disk.
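As a rough sketch of that formula (the 512MB floor and 32768MB ceiling below are invented illustration values, not Sun's actual bounds):

```shell
# Illustrative swap sizing: 2 x physical memory, clamped to assumed
# lower/upper bounds. 512 and 32768 MB are example bounds only.
MEM_MB=4096
SWAP_MB=`expr $MEM_MB \* 2`
if [ $SWAP_MB -lt 512 ]; then SWAP_MB=512; fi
if [ $SWAP_MB -gt 32768 ]; then SWAP_MB=32768; fi
echo "swap device size: ${SWAP_MB}MB"
```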

Notice that the swap device and the dump device are independent zvols. Unfortunately they are not allowed to share the same dataset.

For now, the size of the dump device is based on a default Sun formula that is built into the Solaris installation program; I may override the Sun formula in the near future if we start finding that our dump devices are too small.

# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/zvol/dsk/rpool/swap - - swap - no -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

The vfstab no longer contains entries for /, /var, or /home, but there is still an entry for swap.

# echo ver | format c0t0d0
Volume name = < >
ascii name = <SUN72G cyl 14087 alt 2 hd 24 sec 424>
pcyl = 14089
ncyl = 14087
acyl = 2
nhead = 24
nsect = 424
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 14083 68.34GB (14084/0/0) 143318784
1 unassigned wu 0 0 (0/0/0) 0
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wu 0 0 (0/0/0) 0
4 unassigned wu 0 0 (0/0/0) 0
5 unassigned wu 0 0 (0/0/0) 0
6 unassigned wu 0 0 (0/0/0) 0
7 unassigned wu 0 0 (0/0/0) 0

That is all for today…

I would encourage you to sit down on the couch tonight, turn off your favourite distractions, and start the video on your laptop.


Monday, July 20, 2009

awk notes

Awk is an excellent filter and report writer. Many UNIX utilities generate rows and columns of information, and AWK is an excellent tool for processing those rows and columns; it is often easier to use AWK than a conventional programming language. AWK also has string manipulation functions, so it can search for particular strings and modify the output. AWK also has associative arrays, which are incredibly useful and are a feature most computing languages lack. Associative arrays can make a complex problem a trivial exercise.

I've frequently used awk with a variety of UNIX commands to make life easier for me; here are some one-liners that may prove helpful.
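Before the one-liners, here is a small sketch of the associative arrays mentioned above: counting how many files each owner has in ls -l style output (field 3 is the owner; the sample lines are made up):

```shell
# Count occurrences of each owner (field 3) using an awk associative array.
printf '%s\n' \
  '-rw-r--r-- 1 oracle dba 100 Jul 1 file1' \
  '-rw-r--r-- 1 oracle dba 200 Jul 1 file2' \
  '-rw-r--r-- 1 root sys 300 Jul 1 file3' |
awk '{count[$3]++} END {for (u in count) print u, count[u]}' | sort
```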

Print column 1, column 5, and column 7 of a data file, or of any command's output

awk '{print $1, $5, $7}' data_file

cat file_name |awk '{print $1 $5 $7}'

ls -al |awk '{print $1, $5, $7}' -- Prints file_permissions,size and date

List all file names whose file size is greater than zero.

ls -al |awk '$5 > 0 {print $9}'

List all files whose file size is equal to 512 bytes.

ls -al |awk '$5 == 512 {print $9}'

print all lines

awk '{print }' file_name

awk '{print $0}' file_name

Number of lines in a file

awk ' END {print NR}' file_name

Number of columns in each row of a file

awk '{print NF}' file_name

Sort selected columns of a file and eliminate duplicate rows

awk '{print $1, $5, $7}' file_name |sort -u

List all file names whose file size is greater than 512 bytes and owner is -oracle-

ls -al |awk '$3 == "oracle" && $5 > 512 {print $9}'

List all file names whose owner could be either -oracle- or -root-

ls -al |awk '$3 == "oracle" || $3 == "root" {print $9}'

List all files whose owner is not -oracle-

ls -al |awk '$3 != "oracle" {print $9}'

List all lines which has at least one or more characters

awk 'NF > 0 {print }' file_name

List all lines longer than 50 characters

awk 'length($0) > 50 {print }' file_name

List first two columns

awk '{print $1, $2}' file_name

Swap first two columns of a file and print

awk '{temp = $1; $1 = $2; $2 = temp; print }' file_name

Replace first column as -ORACLE- in a data file

awk '{$1 = "ORACLE"; print }' data_file

Remove first column values in a data file

awk '{$1 =""; print }' data_file

Calculate total size of a directory in Mb

ls -al |awk '{total +=$5};END {print "Total size: " total/1024/1024 " Mb"}'

Calculate total size of a directory including sub directories in Mb

ls -lR |awk '{total +=$5};END {print "Total size: " total/1024/1024 " Mb"}'

Find largest file in a directory including sub directories

ls -lR |awk '{print $5 "\t" $9}' |sort -n |tail -1
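The same size/name pipeline can be tried on canned ls -l style lines (made up here) to see the behaviour without a real directory tree:

```shell
# Feed fabricated ls -l lines through the size/name pipeline above:
# field 5 is the size, field 9 is the name; sort -n puts the biggest last.
printf '%s\n' \
  '-rw-r--r-- 1 u g 512 Jul 1 10:00 small.txt' \
  '-rw-r--r-- 1 u g 9000 Jul 1 10:00 big.log' \
  '-rw-r--r-- 1 u g 40 Jul 1 10:00 tiny' |
awk '{print $5 "\t" $9}' | sort -n | tail -1
```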


Sunday, July 19, 2009

Uninstall Solaris Patch Cluster

We have been doing some patch cluster installs recently in our group and I installed the wrong patch cluster on the server (installed the June patch cluster). The patch cluster that was supposed to be installed on the server was the April one.

Unlike a Windows server, where it is relatively easy to roll back an update, it's quite tricky for Solaris. Or is it?

How did I back out/uninstall the patch cluster? Well, first you have to look at the patch_order file. Normal patching goes through this list from top to bottom; since we have to "uninstall" the patch cluster, we need to go through the list in reverse.

Now there are two ways to do this: the hard way or the easy way :).

The hard way would be to run patchrm on each of the patches... with 100+ patches, highly unlikely hehe.

The easy way is to loop through the list in reverse and run patchrm via a script (now we're talking).

Here's the impromptu script I used to backout the patch cluster:

for i in `tail -r patch_order | awk '{print $1}'`
do
patchrm $i
done

Nothing fancy, just put it in a file, make it executable and you're good to go.
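One caveat: tail -r exists on Solaris and the BSDs but not everywhere (GNU tail lacks it). A portable variation reverses the list with awk instead; echo stands in for patchrm here so the loop can be tried safely, with made-up patch IDs:

```shell
# Reverse the patch list with awk (portable where tail -r is absent),
# then act on each patch ID. echo is a stand-in for patchrm.
for i in `printf '%s\n' 118666-20 140455-01 120094-22 |
          awk '{line[NR]=$1} END {for (n=NR; n>=1; n--) print line[n]}'`
do
  echo patchrm $i
done
```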


Saturday, July 18, 2009

Yuwie pays to socialize.

I was recently invited to try out this site by an online friend, and I decided to sign up and take a peek at what the site offers. I'm not really into social networking stuff, but if you are, or just want to look around and "test" it, then check the site out.

If you are into social networking (myspace, Twitter, etc) and you haven’t checked out Yuwie, you may want to. Yuwie is pretty much like myspace, but the difference is that you get paid to use it. Although not the most amazingly profitable site, it’s still free money (with the global recession and all).

Since Yuwie opened in 2007, they have recently made a lot of changes in an effort to be more like myspace. You can now “own friends” and participate in other various activities “myspace-style”.

Payments are made via PayPal. The site is pretty easy to use and worth the free registration. Check out Yuwie here.


Friday, July 17, 2009

Solaris Zones Notes

1. Virtualization - i.e. VMWare
2. Solaris Zones can host only instances of Solaris. Not other OSs.
3. Limit of 8192 zones per Solaris host
4. Primary zone(global) has access to ALL zones
5. Non-global zones, do NOT have access to other non-global zones
6. Default non-global zones derive packages from global zone
7. Program isolation - zone1(Apache), zone2(MySQL)
8. Provides 'z' commands to manage zones: zlogin, zonename, zoneadm, zonecfg

###Features of GLOBAL zone###
1. Solaris ALWAYS boots(cold/warm) to the global zone
2. Knows about ALL hardware devices attached to the system
3. Knows about ALL non-global zones

###Features of NON-GLOBAL zones###
1. Installed at a location on the filesystem of the GLOBAL zone 'zone root path' /export/home/zones/{zone1,zone2,zone3,...}
2. Share packages with GLOBAL zone
3. Manage distinct hostname and tables files
4. Cannot communicate with other non-global zones by default. NIC must be used, which means, use standard network API(TCP)
5. GLOBAL zone admin. can delegate non-global zone administration

###Zone Configuration###
Use: zonecfg - to configure zones
Note: zonecfg can be run in interactive, non-interactive, and command-file modes

Requirements for non-global zones:
1. hostname
2. zone root path. i.e. /export/home/zones/testzone1
3. IP address - bound to logical or physical interface

Zone Types:
1. Sparse Root Zones - share key files with global zone
2. Whole Root Zones - require more storage

Steps for configuring non-global zone:
1. mkdir /export/home/zones/testzone1 && chmod 700 /export/home/zones/testzone1
2. zonecfg -z testzone1
3. create
4. set zonepath=/export/home/zones/testzone1 - sets root of zone
5. add net ; set address=
6. set physical=e1000g0
7. (optional) set autoboot=true - testzone1 will be started when system boots
8. (optional) add attr ; set name=comment; set type=string; set value="TestZone1"
9. verify zone - verifies zone for errors
10. commit changes - commit

11. Zone Installation - zoneadm -z testzone1 install - places zone, 'testzone1' into 'installed' state. NOT ready for production
12. zoneadm -z testzone1 boot - boots the zone, changing its state
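The zone configuration in steps 2-10 above can also be driven non-interactively from a command file, as the command-file mode mentioned earlier suggests. A sketch, with a placeholder IP address:

```
# testzone1.cfg - feed to zonecfg with: zonecfg -z testzone1 -f testzone1.cfg
# (192.168.1.50 is a placeholder address for illustration)
create
set zonepath=/export/home/zones/testzone1
set autoboot=true
add net
set address=192.168.1.50
set physical=e1000g0
end
commit
```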

###Zlogin - is used to login to zones###
Note: each non-global zone maintains a console. Use 'zlogin -C zonename' after installing zone to complete zone configuration

Note: Zlogin permits login to non-global zone via the following:
1. Interactive - i.e. zlogin -l username zonename
2. Non-interactive - zlogin options command
3. Console mode - zlogin -C zonename
4. Safe mode - zlogin -S

zoneadm -z testzone1 reboot - reboots the zone
zlogin testzone1 shutdown


Thursday, July 16, 2009

SYSLOG Implementation Notes

Note: Syslog is the default logging handler/router in Solaris
Note: Defaults to UDP:514
Note: Segment your Syslog Host(s) on a distinct subnet, protected by ACLs

pkgchk -lP /usr/sbin/syslogd

Syslog can log to the following locations:
1. remote host
2. local file (Suggested destination because of I/O performance)
3. console
4. specific users
5. *

Note: Syslog processes 3 pieces of information represented by 2 fields:
/etc/syslog.conf - primary configuration file for Syslog
man syslog.conf

1: selector(*.emerg) 2: action(/dev/console)
*.emerg /dev/console
Selector = facility(user).severity_level(debug)
Action = target for log entry (files, console, remote host)
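For example, a couple of illustrative /etc/syslog.conf entries (the selector and action fields must be separated by TABs; the file paths are examples):

```
# selector		action
mail.info		/var/log/maillog
local0.notice		/var/log/ciscofirewall1.log
```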

###Syslog Recognized Facilities###
,LOCAL0-7(provides 8 usable facilities),MARK,*

### 8 Syslog Recognized Severity Levels###
1. EMERG - yields least output
8. DEBUG - yields most output

Note: restart syslog after changing /etc/syslog.conf, e.g. after pointing a selector at /var/log/ciscofirewall1.log:
touch /var/log/ciscofirewall1.log
svcadm restart system-log

###Log Rotation using logadm###
which logadm
pkgchk -lP /usr/sbin/logadm - member of SUNWcsu
logadm is configured to run daily in root's crontab
crontab -l

/etc/logadm.conf - default configuration file
Note: don't memorize all parameters. Execute 'logadm -h'
Note: command-line directives override /etc/logadm.conf directives

Note: logadm preserves 10 backups of log files named logname.0-.9
Note: logadm supports shell wildcards '*', '?'
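A sample /etc/logadm.conf entry might look like this (the log path is from the earlier syslog example; -C keeps 8 rotated copies, -s rotates once the log passes 1MB, -z 0 gzips all rotated copies):

```
# Rotate the firewall log: keep 8 copies, rotate past 1m, gzip old copies
/var/log/ciscofirewall1.log -C 8 -s 1m -z 0
```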


Wednesday, July 15, 2009

Snort NIDS

1. Packet Capturing - libpcap.a(
2. Packet Logging - Captures are stored to disk (ASCII/TCPDump Formats)
3. Network Intrusion Detection Mode

Note: Software Companion DVD includes Snort 2.0(older version)

1. libpcap
2. libpcre

###Configuring Snort###
./configure --with-libpcap-libraries=/opt/sfw/lib --with-libpcre-includes=/opt/sfw/include --with-libpcre-libraries=/opt/sfw/lib

Appended to PATH: /usr/sfw/bin:/usr/ccs/bin

make install

###Snort as a Sniffer###
snort -v - Dumps link headers(Layers 3(IPs) & 4(Ports) of the OSI Model)
snort -v -i e1000g0
snort -vd -i e1000g0 - Dumps Application Layer (Layer-7 of OSI Model)
snort -ve -i e1000g0 - Dumps data-link layer (Layer-2 of OSI Model)
snort -vde -i e1000g0 - Dumps Layers 2,3,4,7 of OSI Model

###Snort as a Packet Logger###
Note: Identical to sniffer, except, data is directed to file. Improves I/O.
snort -L snortlog.1
Note: Snort defaults to '/var/log/snort' to store binary log and alert file

snort -L snortlog.1 -l ./log

Note: Snort supports TCPDump's Boolean primitives and operators.
Additionally, Snort supports Berkeley Packet Filters (BPFs)
snort options BPFs


Tuesday, July 14, 2009

Snoop Notes

1. Packet capturing facilities (ALL levels of OSI model, minus physical)
2. Packet playback/replay facility
3. Sniffs on first detected, non-loopback interface - output to STDOUT
4. MUST be executed as root

Note: Try to snoop to an output file as opposed to STDOUT for performance reasons (to minimize packet loss)

snoop -o snoop1.out - redirects captured traffic to file named 'snoop1.out'
and returns a packet-count to STDOUT

Note: If connected to a switched environment, MIRROR the traffic to the Sun box in order for traffic to be available to snoop

snoop -i snoop1.out - reads the captured files
Note: snoop captures packets until killed with CTRL-C or disk runs out of space

snoop -i snoop1.out -p 11573,11577 - extracts packet ranges 11573-11577
snoop -v -i snoop1.out - VERBOSE (ALL OSI layers, 2-7)
snoop -V -i snoop1.out - SUMMARY (Returns interesting packet payload)

Note: snoop supports Boolean primitives (host, tcp, udp, ip) & Boolean operators (AND, OR, NOT)

snoop -i snoop1.out tcp port 80

Note: snoop -o output_file - captures layers 2-7

snoop -o snoop1.out udp

snoop -o snoop1.out

###FTP Traffic Snoop###
snoop -o snoop_ftp_traffic.out host linuxcbtsun1 and tcp and port 21


Packet Capturing - captures packets from network interfaces

Note: 2 major utilities supporting TCPDump's format include:
1. Ethereal - GUI protocol analyzer/Sniffer
2. Snort NIDS - Sniffer/Logger/NIDS

TCPDump supports 3 qualifiers to assist in creating expressions:
1. Type - host|net|port i.e. host
2. Direction - src|dst|src or dst|src and dst
3. Protocol - tcp|udp|ip

tcpdump options expression

tcpdump -D - returns available interfaces
tcpdump -i interface_name - binds to specific interface
tcpdump -q suppresses some packet header information
tcpdump -n - avoids name resolution - improves performance


Monday, July 13, 2009

Sendmail MTA Features

Default configuration runs Sendmail
Runs as 2 daemons
1. queue runner - submits jobs into queue(PHP script/mailx/sendmail/etc.)
a. it runs as a non-privileged user called 'smmsp'
b. places messages into queue directory: /var/spool/mqueue
c. mailq command dumps the current status of the queue(s)

2. MTA mode - message delivery to local/remote recipients
b. it runs as root - to bind to well-known TCP:25

Note: Sendmail works with SMF
svcadm restart sendmail
svcs -l sendmail

Typical Mail Components in distributed mail environments:
1. MTA - Message Transfer Agent (Sendmail/Postfix/qmail)
2. MUA - Mail User Agent (mail, mutt, mailx, MS Outlook, Eudora, etc.)
3. MDA - Mail Delivery Agent (mail.local, procmail, etc.)

Config files:
1. /etc/mail/ - primary config file for Sendmail MTA
2. /etc/mail/ - primary config file for Sendmail MSP (smmsp)

Config files macros using m4 language:
1. /etc/mail/cf/cf/
2. /etc/mail/cf/cf/

Note: Sendmail does NOT understand m4 files. Use m4 to generate updated .cf files if necessary

####/etc/aliases - used for local mail delivery###
Contains key aliases for 'postmaster' & system daemons



newaliases - generates updated DB for aliases

###per-user mail###
1. Sendmail stores mail using the older mbox format, which stores all of a user's mail in one potentially huge ASCII text file
2. /var/mail/username - flagged with the STICKY bit

###Mail delivery using local tools###
sendmail is monolithic - 1 program does it all (client/server/MSP/MTA)

sendmail -v unixcbt

Note: MSP submits to: /var/spool/clientmqueue

###Virtual Domains/Users Support###

Virtual Users:
Create: /etc/mail/virtusertable
Populate with mappings: virtual_email_address local_mailbox|remote_email
unixcbt@unixcbt.internal unixcbt

Configure /etc/mail/ via /etc/mail/cf/cf/
- FEATURE(`virtusertable',`hash -o /etc/mail/virtusertable.db')
makemap hash virtusertable - creates the DB file:

###Relay Domains###
Houses domains that sendmail should relay; local and/or remote

###IMAP/POP2|3 Support###
Differences between IMAP & POP
1. IMAP stores messages on server
2. POP downloads messages to client

Note: IMAP server must support mbox mail storage format and optionally Maildir mail storage format

Download IMAP2004g from

###Configure INETD control of IMAP & POP3 services###
pop3 stream tcp nowait root /usr/local/sbin/ipop3d ipop3d
imap stream tcp nowait root /usr/local/sbin/imapd imapd

Note: use 'inetconv' to convert INETD entries in /etc/inetd.conf to SMF

###Evolution MUA - Connect to POP3 & IMAP Service###
Installed openssl-0.9.8 to support IMAP2004g
Configure Evolution
Note: Retrieving & Sending messages are distinct functions
1. SMTP - Sending
2. IMAP/POP3/MS Exchange/etc. - Retrieval


Sunday, July 12, 2009

System Security Notes

/var/adm/sulog - houses SU attempts
SU TIMESTAMP +||- TTY Switched_User_From_To
SU 06/17 11:13 + pts/4 root-unixcbt

/var/adm/loginlog - Does NOT exist by default
Note: houses failed logins after threshold (default of 5)
touch /var/adm/loginlog

logins command
logins -x -l unixcbt - returns info. from /etc/{passwd,shadow}
logins -p - lists users without passwords

###Password Generation Encryption Algorithm###
Note: Default in Solaris 10 is UNIX, legacy encryption - The weakest
/etc/security/policy.conf - man policy.conf(4)
Note: password encryption changes take effect at user's next password change



Saturday, July 11, 2009

Samba Notes

Integrates Unix-type systems with Windows
SMB(139)/CIFS(445) - 2 protocols used to communicate with Windows/Samba servers

Key Client Utilities:
1. smbtree - network neighborhood text utility
It enumerates workgroups, hosts & shares
smbtree -b - relies upon broadcasts for resolving workgroups/hosts
smbtree -D - echoes discovered workgroups using broadcasts/master browser

2. smbclient - provides an FTP-like interface to SMB/CIFS servers
smbclient service_name(//LINUXCBTWIN1/LinuxCBT)

Note: Most, if not all, Samba clients operate in case-insensitive mode
smbclient //linuxcbtwin1/linuxcbt
Note: when in smbclient interactive mode, prefix commands with '!' to execute locally on client, otherwise commands run on server

smbclient -L linuxcbtwin1 - enumerates the shares on the server

smbclient -A ./.smbpasswd //linuxcbtwin1/solaris10


3. smbtar - facilitates backups of remote shares
smbtar -s linuxcbtwin1 -x solaris10 -t solaris10.tar - backup
smbtar -s linuxcbtwin1 -x solaris10 -r -t solaris10.tar - restore

###Remote Desktop Installation ###
Requirements -
1. libiconv
2. libgcc 3.3.2 or higher
3. libopenssl 0.9.7
4. rdesktop-1.4.1

RDesktop supports Remote Desktop Protocol (RDP) versions 4 & 5
Connects to:
1. Windows XP - RDP-5
2. Windows 2000 - RDP-5
3. Windows 2003 - RDP-5
4. Windows NT Server 4 - Terminal Services Edition - RDP-4


rdesktop -g 700x500 -a 16 server_name(

###Samba Server Configuration###
/etc/sfw/smb.conf-example - modify & save as /etc/sfw/smb.conf

smb.conf - is the main configuration file for Samba server & many of the Samba clients search for key directives from the file.

1. File & Print sharing
2. Implemented as 2 daemons (smbd & nmbd)
smbd - file & print sharing - connections based on SMB/CIFS protocols
SMB - TCP 139
CIFS - TCP 445
nmbd - handles NETBIOS names using primarily UDP connectivity
Browse list (master browser or derive current list from master browser)
Names of servers - derived using broadcast or WINS
UDP 137 & 138
3. Legacy service - does not currently benefit from SMF
4. Service is located in: /etc/init.d & referenced via run-levels
5. Configuration changes to /etc/sfw/smb.conf are read automatically

###Samba Security Modes###
Default = security = user - relies upon local Unix accounts database & Samba database to grant or deny access to shared resources
1. /etc/passwd
2. /etc/sfw/smbpasswd - handles translation of Windows auth to Unix auth
3. /etc/sfw/smbusers - provides translation between Unix & Windows users
i.e. translation of Windows' 'guest' user to Unix' 'nobody' user

###User Authentication Mode###
Note: NETBIOS names are restricted to 16 characters, however, 15 characters are configurable
linuxcbtsun1.linuxcbt.internal = FQDN
Note: smbpasswd -a unixcbt - create permitted samba users in /etc/sfw/private/smbpasswd file - otherwise, access will be denied

###Samba Web Administration Tool (SWAT)###
Steps to enable Swat:
1. create an /etc/services entry for SWAT - TCP:901
2. create an /etc/inetd.conf entry for SWAT
swat stream tcp nowait root /usr/sfw/sbin/swat swat
3. Convert the inetd entry for SWAT to SMF using 'inetconv'


Friday, July 10, 2009

Quotas implementation and management

Soft Limits - function as stage-1 or warning stage
- if user exceeds soft limit, timer is invoked (default 7-days)
i.e. 100MB - if the user stays over the soft limit beyond the timer period, the soft limit becomes a hard limit

Hard Limits - function as a storage ceiling - CANNOT be exceeded
- if the user hits the hard limit, the system will not allocate additional storage

File-system perspective of quotas:
2 objects are monitored:

FILE(test.txt) -> 1-INODE -> 1-or-more Data BLOCKS(default 1K)

Quota Tools:
1. edquota - facilitates the creation of quotas for users
2. quotacheck - checks for consistency in usage and quota policy
3. quotaon - enables quotas on file system
4. repquota - displays quota information

###Steps to enable quota support###
1. modify /etc/vfstab - enable quotas per file system
"Mount Options" column - 'rq'
2. create empty 'quotas' file in root of desired file system
touch /export/home/quotas && chmod 600 /export/home/quotas
3. edquota unixcbt
edquota -p unixcbt unixcbt2 unixcbt3 unixcbt4 - copies unixcbt's quota policy to users unixcbt2,3,4
4. quotacheck -va
5. quota -v unixcbt
6. quotaon -v /dev/dsk/c0t0d0s7 -enable quota support
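The six steps above, put together as a single session (the user names follow the examples in the text; the vfstab line is shown as a comment since that file is edited in place):

```shell
# 1. /etc/vfstab - add 'rq' in the mount-options column for the file system:
#    /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home  ufs  2  yes  rq

# 2. Empty 'quotas' file in the file system root, readable only by root
touch /export/home/quotas
chmod 600 /export/home/quotas

# 3. Define the policy for one user, then clone it to the others
edquota unixcbt
edquota -p unixcbt unixcbt2 unixcbt3 unixcbt4

# 4-6. Check consistency, enable quotas, and report usage
quotacheck -va
quotaon -v /export/home
repquota -va
```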


Thursday, July 9, 2009

Solaris Patch Cluster Error Codes

I find it really annoying to install Sun's patch cluster and stare at the screen at error codes that look something like these:

Installing 118666-20...
Installation of 118666-20 succeeded. Return code 0.
Installing 140455-01...
Installation of 140455-01 failed. Return code 8.
Installing 120094-22...
Installation of 120094-22 failed. Return code 8.
Installing 139943-01...
Installation of 139943-01 failed. Return code 1.
Installing 121211-02...
Installation of 121211-02 failed. Return code 1.
Installing 119986-03...
Installation of 119986-03 failed. Return code 1.
Installing 120543-14...
Installation of 120543-14 failed. Return code 8.
Installing 126440-01...
Installation of 126440-01 succeeded. Return code 0.
Installing 123590-10...
Installation of 123590-10 failed. Return code 8.
Installing 139608-02...
Installation of 139608-02 succeeded. Return code 0.
Installing 119081-25...
Installation of 119081-25 failed. Return code 1.
Installing 138322-03...
Installation of 138322-03 succeeded. Return code 0.

You'll never have a clue what the heck they mean until you visit the log.

Since this kind of drives me nuts, I decided to look for a summary of what these error codes mean, and here it is. Not really sure if these are up to date, though.

Here they are:

Exit code Meaning
0 No error
1 Usage error
2 Attempt to apply a patch that’s already been applied
3 Effective UID is not root
4 Attempt to save original files failed
5 pkgadd failed
6 Patch is obsoleted
7 Invalid package directory
8 Attempting to patch a package that is not installed
9 Cannot access /usr/sbin/pkgadd (client problem)
10 Package validation errors
11 Error adding patch to root template
12 Patch script terminated due to signal
13 Symbolic link included in patch
15 The prepatch script had a return code other than 0.
16 The postpatch script had a return code other than 0.
17 Mismatch of the -d option between a previous patch install and the current one.
18 Not enough space in the file systems that are targets of the patch.
19 $SOFTINFO/INST_RELEASE file not found
20 A direct instance patch was required but not found
21 The required patches have not been installed on the manager
22 A progressive instance patch was required but not found
23 A restricted patch is already applied to the package
24 An incompatible patch is applied
25 A required patch is not applied
26 The user specified backout data can’t be found
27 The relative directory supplied can’t be found
28 A pkginfo file is corrupt or missing
29 Bad patch ID format
30 Dryrun failure(s)
31 Path given for -C option is invalid
32 Must be running Solaris 2.6 or greater
33 Bad formatted patch file or patch file not found
34 The appropriate kernel jumbo patch needs to be installed
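Rather than visiting the log after every failure, the table above can be folded into a small lookup helper. This is a generic POSIX shell sketch of my own, not part of the patch tools, and it covers only a subset of the codes:

```shell
# patch_rc: print the meaning of a patchadd return code
# (subset of the full table above; anything else points at the log)
patch_rc() {
    case "$1" in
        0)  echo "No error" ;;
        1)  echo "Usage error" ;;
        2)  echo "Attempt to apply a patch that's already been applied" ;;
        5)  echo "pkgadd failed" ;;
        8)  echo "Attempting to patch a package that is not installed" ;;
        25) echo "A required patch is not applied" ;;
        *)  echo "See the full return-code table and the patchadd log" ;;
    esac
}

patch_rc 8    # -> Attempting to patch a package that is not installed
patch_rc 0    # -> No error
```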


Tuesday, July 7, 2009

Network Time Protocol (NTP) Notes

Synchronizes the local system and can be configured to synch any NTP-aware host

Hierarchical in design - 1 through 16 strata
Lower stratum values are more accurate time sources
Stratum 1 servers are connected to external, more accurate time sources such as GPS

Note: Less latency usually results in more accurate time

External Time Source(GPS/Radio/etc.)
-NTP - Stratum 1
-NTP Stratum 2 - Solaris Client/Server
Note: A Solaris 10 NTP system can be both client & server

Note: configure NTP clients to synch to 3 or more clocks(time sources)

###Client configuration###
xntpd or the ntp service searches for /etc/inet/ntp.conf

Note: NTP uses UDP 123 in source & destination ports

ntpdate ntp_server - synchronizes, one-off, local clock
Note: ntpdate does NOT update local clock if xntpd is running locally

rdate - relies upon older time service

ntpq - NTP query utility runs interactively & non-interactively
ntpq -np - lists peers without name resolution - non-interactive invocation
ntpq - invokes interactive mode

ntptrace - traces path to time source

ntpq - queries local or remote NTP servers
ntptrace - traces path to external time source
ntpdate - updates local clock
/etc/inet/ntp.conf - (server server_ip)
svcadm enable ntp - starts NTP (Server and/or Client)

NTP Pool Site: (Derive NTP public servers from their lists)
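A minimal client setup based on the notes above; the pool.ntp.org server names are placeholders for whatever public servers you derive from the NTP Pool lists:

```shell
# /etc/inet/ntp.conf - minimal client configuration (server names are examples):
#   server 0.pool.ntp.org
#   server 1.pool.ntp.org
#   server 2.pool.ntp.org

# One-off sync (only works while xntpd is NOT running locally)
ntpdate 0.pool.ntp.org

# Start the service; it reads /etc/inet/ntp.conf
svcadm enable ntp

# Verify peers without name resolution
ntpq -np
```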


Monday, July 6, 2009

Network Mapper Nmap Notes

Nmap ("Network Mapper") is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and both console and graphical versions are available.

Performs network reconnaissance/vulnerability testing

Compilation Instructions:
1. export PATH=$PATH:/usr/ccs/bin
2. ./configure
3. make || gmake
4. gmake install - copies nmap to /usr/local/bin

Note: nmap can be run by any user on the system; however, only root may perform the more dangerous functions, e.g. SYN-based scans

###Check ports of hosts###
Running 'nmap -v' as root causes a SYN-based (half-open) scan to occur:
SYN -> SYN-ACK -> Termination
SYN -> SYN-ACK -> ACK - full TCP connect() scan, performed by normal users

Nmap can export to the following file types:
1. Normal
2. XML
3. Greppable
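Those three export formats map onto nmap's -oN, -oX, and -oG options; the 192.168.1.10 target below is a placeholder:

```shell
# SYN scan (root only) of a single host, saved in all three formats
nmap -v -sS -oN scan.txt -oX scan.xml -oG scan.grep 192.168.1.10

# TCP connect() scan - the full three-way handshake, usable by normal users
nmap -sT 192.168.1.10

# -oA writes all three formats at once, sharing a common basename
nmap -sS -oA scan_results 192.168.1.10
```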


Sunday, July 5, 2009

Network File System(NFS) Notes

Implemented by most if not all nix-type OSs(Solaris/AIX/Linux/FreeBSD)
NFS seamlessly mounts remote file systems locally

NFS Components include:
1. NFS Client (mount(temporary access), /etc/vfstab)
2. NFS Server
3. AutoFS

NFS versions 3 & higher support large files (>2GB)

NFS Major versions:
2 - original
3 - improved upon version 2
4 - current version

Note: Solaris 10 simultaneously supports ALL NFS versions
/etc/default/nfs - contains defaults for NFS server & client

Note: client->server NFS connection involves negotiation of NFS version to use

###Steps for mounting remote file systems###
1. ensure that a local mount point exists & is empty
Note: local mount points with files and/or directories will be unavailable while a remote file system is locally-mounted

2. ensure that NFS server is available and sharing directories

3. mount locally the remote file system.
mount -F nfs -o ro linuxcbtmedia:/tempnfs1 /tempnfs1
Note: use 'man mount' to determine mount options for various FSs

4. setup persistent mounts in /etc/vfstab file
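Steps 3 and 4 can be sketched as follows, reusing the linuxcbtmedia example from above; the vfstab line is shown as a comment since that file is edited in place:

```shell
# One-off, read-only mount of the remote export
mkdir -p /tempnfs1
mount -F nfs -o ro linuxcbtmedia:/tempnfs1 /tempnfs1

# Persistent mount - /etc/vfstab entry
# (the device-to-fsck and fsck-pass fields are '-' for NFS):
#   linuxcbtmedia:/tempnfs1  -  /tempnfs1  nfs  -  yes  ro
```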

###Steps for sharing local file system locations###
1. ensure that NFS is running
svcs -a | grep -i nfs
Note: you may enable the NFS server and update share information independently

Start using: svcadm enable svc:network/nfs/server
Note: NFS Server will NOT start if there are NO directories to share

2. share -F nfs -d test_share /tempnfssun1 - exports for current session. Does NOT persist across reboots

3. Configure NFS sharing for persistence, using share command

share -F nfs -d test_share /tempnfssun1

Note: consult 'man share_nfs' for permissions info.
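On Solaris, persistence for step 3 is conventionally handled by /etc/dfs/dfstab, which is read at boot; a sketch, assuming the same share as above:

```shell
# Session-only share (lost at reboot)
share -F nfs -d "test_share" /tempnfssun1

# Persistent: put the same command in /etc/dfs/dfstab, then re-share everything
echo 'share -F nfs -d "test_share" /tempnfssun1' >> /etc/dfs/dfstab
shareall

# Confirm what is currently exported
share
dfshares localhost
```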

###AutoFS###
1. Just-in-time mounting of file systems
2. Controlled by 'automountd' daemon
3. Managed via autofs service
4. References map files to determine file systems to mount
5. Obviates need to distribute root password to non-privileged users

/etc/default/autofs - contains configuration directives for autofs

###AutoFS Maps###
3 Types:
1. Master map - /etc/auto_master
2. Direct map - /etc/auto_direct - facilitates direct mappings
3. Indirect map - /etc/auto_* - referenced from /etc/auto_master

Note: /etc/auto_master is always read by autofs(automountd daemon)
/etc/nsswitch.conf - used to determine lookup location for automount

-hosts - references hosts defined in /etc/hosts & the hosts MUST export shares using NFS

Note: changes to /etc/auto_master(primary autofs policy file) usually requires a service restart: svcadm restart autofs

Note: AutoFS defaults to permitting client to browse potential mount points

###Direct mapping example###
Note: Direct mappings seamlessly merge remote exports with local directories
1. create auto_direct mapping in /etc/auto_master:
/- auto_direct -vers=3
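A direct-map sketch based on the entry above; the /apps/docs path and the linuxcbtmedia export are hypothetical examples:

```shell
# /etc/auto_master - hand all direct mappings to a separate map file:
#   /-    auto_direct    -vers=3

# /etc/auto_direct - local path on the left, NFS export on the right:
#   /apps/docs    linuxcbtmedia:/export/docs

# Changes to the master map usually require a service restart
svcadm restart autofs
```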


Saturday, July 4, 2009

Network Configuration Overview

1. Local Files Mode - config is defined statically via key files
2. Network Client Mode - DHCP is used to auto-config interface(s)

Current Dell PE server has 3 NICs:
1. e1000g0 - plumbed (configured for network client mode)
2. iprb0 - unplumbed
3. iprb1 - unplumbed

1-Virtual Mandatory interface lo0 - loopback

Determine physical interfaces using 'dladm show-dev' or 'dladm show-link'
Determine plumbed and loopback interfaces using 'ifconfig -a'

NIC naming within Solaris OS: i.e. e1000g0 - e1000g(driver name) 0(instance)

Layers 2 & 3 info. - ifconfig -a, or ifconfig e1000g0
Layer 1 info. - 'dladm show-dev' or 'dladm show-link'

###Key network configuration files###
svcs -a | grep physical
svcs -a | grep loopback

1. IP Address - /etc/hostname.e1000g0, /etc/hostname.iprb0 | iprb1
2. Domain name - /etc/defaultdomain - linuxcbt.internal
3. Netmask - /etc/inet/netmasks -
4. Hosts database - /etc/hosts, /etc/inet/hosts - loopback & ALL interfaces
5. Client DNS resolver file - /etc/resolv.conf
6. Default Gateway - /etc/defaultrouter -,,
7. Node name - /etc/nodename
Name service configuration file - /etc/nsswitch.conf

netstat -D - returns DHCP configuration for ALL interfaces
ifconfig -a - returns configuration for ALL interfaces

Reboot system after transitioning from network client(DHCP) mode to local files(Static) mode

mv dhcp.e1000g0 to some other name or remove the file so that the DHCP agent is NOT invoked
echo "linuxcbtsun1" > /etc/nodename

###Plumb/enable the iprb0 100Mb/s interface###
Plumbing interfaces is analogous to enabling interfaces
Note: is a Linux host waiting to communicate with iprb0 interface
1. ifconfig iprb0 plumb up - this will enable iprb0 interface
2. ifconfig iprb0 netmask - this will enable layer-3 IPv4 address

Steps to UNplumb an interface:
1. ifconfig iprb0 unplumb down

###Ensure that newly-plumbed interface settings persist across reboots###
Steps include updating/creating the following files:
1. echo "" > /etc/hostname.iprb0
2. create entry in /etc/hosts - linuxcbtsun1
3. echo "" >> /etc/inet/netmasks

Note: To down interface, execute:
ifconfig interface_name down
ifconfig iprb0 down && ifconfig iprb0
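Putting the plumb and persistence steps together in one session; the 192.168.75.x address and netmask are hypothetical RFC-1918 stand-ins for the values elided above:

```shell
# Enable the interface and assign a (hypothetical) layer-3 address
ifconfig iprb0 plumb
ifconfig iprb0 192.168.75.100 netmask 255.255.255.0 up

# Persist across reboots: hostname file, hosts database, netmasks
echo "192.168.75.100" > /etc/hostname.iprb0
echo "192.168.75.100  linuxcbtsun1" >> /etc/hosts
echo "192.168.75.0    255.255.255.0" >> /etc/inet/netmasks

# Take it down and unplumb when finished
ifconfig iprb0 down
ifconfig iprb0 unplumb
```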

###Sub-interfaces/Logical Interfaces###
e1000g0(physical interface) - Apache website) Apache website) for SSH)

iprb0 -

Use 'ifconfig interface_name addif ip_address '
ifconfig e1000g0 addif (RFC-1918 - defaults /24)

Note: This will automatically create an 'e1000g0:1' logical interface
Note: Solaris places new logical interface in DOWN mode by default
Note: use 'ifconfig e1000g0:1 up' to bring the interface up

Note: logical/sub-interfaces are contingent upon physical interfaces
Note: if physical interface is down, so will the logical interface(s)
Note: connections are sourced using IP address of physical interface

###Save logical/sub-interface configuration for persistence across reboots###

1. gedit /etc/hostname.e1000g0:1 -
2. gedit /etc/hostname.e1000g0:2 -
3. Optionally update /etc/hosts - /etc/inet/hosts
4. Optionally update /etc/inet/netmasks - when subnetting

Note: To remove logical interface execute the following:
ifconfig physical_interface_name removeif ip_address
ifconfig iprb0 removeif
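An addif round trip with a hypothetical RFC-1918 address:

```shell
# Add a second address - Solaris creates e1000g0:1 automatically, in DOWN mode
ifconfig e1000g0 addif 192.168.75.25 netmask 255.255.255.0
ifconfig e1000g0:1 up

# Persist it across reboots
echo "192.168.75.25" > /etc/hostname.e1000g0:1

# Remove it again - note that removeif takes the PHYSICAL interface name
ifconfig e1000g0 removeif 192.168.75.25
```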

###/etc/nsswitch.conf - name service configuration information ###
functions as a policy/rules file for various resolution:
1. DNS
2. passwd(/etc/passwd,/etc/shadow),group(/etc/group)
3. protocols(/etc/inet/protocols)
4. ethers or mac-to-IP mappings
5. hosts - where to look for hostname resolution: files(/etc/hosts) dns(/etc/resolv.conf)


Friday, July 3, 2009

Could not get the signature for domain X

When showplatform gives out the error:

"Could not get the signature for domain X"

% showplatform
Q - - Powered Off
R - - Powered Off
Could not get the signature for domain B
Could not get the signature for domain C
Could not get the signature for domain D

- Troubleshooting:

'showplatform' examines the PCD for domain state. If the PCD
indicates the keyswitch position is ON|DIAG|SECURE, 'showplatform'
attempts to access the domain's Golden IOSRAM to determine the
domain signature.

- Resolution:

The more common cause of this error is that the PCD does not
accurately reflect the true state of the platform. See "Background
information" for some scenarios that can result in PCD discrepancy.

To correct the discrepancy, perform a 'setkeyswitch standby' followed
by a 'setkeyswitch off' for all domains that report the signature
error. Answer 'y' to any queries.

In cases of extreme PCD corruption, setkeyswitch operations may
not succeed. If this occurs, the PCD for the affected domain(s)
can be returned to defaults. If setkeyswitch is not successful,
do the following to clean up the PCD. Note this will null the
board assignments and available component lists (ACLs) for the
affected domains.

- Note the configuration of affected domain(s). Include board
assignments, ACLs, etc.
- Issue 'setdefaults -d X -p', where X is the domain [A..R]. The
-p preserves the NVRAM settings.
- Reassign boards to the domain(s) with 'addboard' and setup any
ACLs with 'setupplatform'.
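The recovery steps above, sketched for a single domain (B here); exact flags vary by SMS release, so treat the board name and options as placeholders and check the relevant man pages:

```shell
# Repeat for each domain reporting the signature error
setkeyswitch -d B standby
setkeyswitch -d B off       # answer 'y' to any queries

# Only if setkeyswitch fails due to PCD corruption:
# reset that domain's PCD to defaults, preserving NVRAM (-p)
setdefaults -d B -p

# Rebuild the board assignments and ACLs noted beforehand
addboard -d B SB0           # board location is a placeholder
setupplatform               # restore the ACLs interactively
```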


- Additional background information:

Two scenarios that can lead to the above situation are:

1. Incomplete shutdown of domains prior to a platform poweroff

An 'init 0' is done to the domain, so the domain is down. But, the
PCD still has the keyswitch as ON|DIAG|SECURE. Then the SCs are
shutdown and the platform powered off. When power is restored to
the platform, the PCD doesn't reflect the powered-off state of the domains.

This is different from a total power loss because SMS is shutdown
gracefully. There is no indicator to SMS to indicate that a power
recovery is needed, so domains the PCD lists as ON|DIAG|SECURE are
taken at face value.

2. Restoration of an old/stale smsbackup file

Similar logic to above, but even if the domains were appropriately
setkeyswitched OFF, a stale smsbackup file can restore a PCD that has
incorrect keyswitch states.

Also of note is that setkeyswitch operations to ON|DIAG|SECURE may
not be successful. PCD consistency is verified as part of POST and if
the PCD is inconsistent, POST does not continue as the state of the
platform is in question and further activity could interrupt running
domains. A typical POST failure indicative of PCD inconsistency is:

pcs_pcd_get_domain_info(): Golden sram for domain 7=H = IO14,
Not in active slot1 vector 00000
pcs_pcd_get_domain_info(): MAND Net for domain 7=H = IO14,
Not in active slot1 vector 00000
pcs_pcd_get_domain_info(): Golden sram for domain 8=I = IO16,
Not in active slot1 vector 00000
pcs_pcd_get_domain_info(): MAND Net for domain 8=I = IO16,
Not in active slot1 vector 00000
Exitcode = 44: Error accessing Physical Config Database

Kudos to: Scott Davenport
APPLIES TO: Hardware/Sun Fire /15000, Hardware/Sun Fire /12000


Thursday, July 2, 2009

Netstat Notes

Lists connections for ALL protocols & address families to and from machine
Address Families (AF) include:
INET - ipv4
INET6 - ipv6
UNIX - Unix Domain Sockets(Solaris/FreeBSD/Linux/etc.)

Protocols Supported in INET/INET6 include:
TCP, IP, ICMP(PING(echo/echo-reply)), IGMP, RAWIP, UDP(DHCP,TFTP,etc.)

Lists routing table
Lists DHCP status for various interfaces
Lists net-to-media table - network to MAC(network card) table

###NETSTAT Usage###
netstat - returns sockets by protocol using /etc/services for lookup
/etc/nsswitch.conf is consulted by netstat to resolve names for IPs

netstat -a - returns ALL protocols for ALL address families (TCP/UDP/UNIX)

netstat -an - -n option disables name resolution of hosts & ports

netstat -i - returns the state of interfaces. pay attention to errors/collisions/queue columns when troubleshooting performance

netstat -m - returns streams(TCP) statistics

netstat -p - returns net-to-media info (MAC/layer-2 info.) i.e. arp

netstat -P protocol (ip|ipv6|icmp|icmpv6|tcp|udp|rawip|raw|igmp) - returns active sockets for selected protocol

netstat -r - returns routing table

netstat -D - returns DHCP configuration (lease duration/renewal/etc.)

netstat -an -f address_family
netstat -an -f inet|inet6|unix
netstat -an -f inet - returns ipv4 only information

netstat -n -f inet
netstat -anf inet -P tcp
netstat -anf inet -P udp


Wednesday, July 1, 2009

MySQL Notes

pkginfo -x | grep -i mysql
Note: Current version of MySQL is NOT managed by SMF
Steps to initialize MySQL:
1. /usr/sfw/bin/mysql_install_db - initializes default DBs & tables
/usr/sfw/bin/mysqladmin -u root password 'abc123'
2. groupadd mysql && useradd -g mysql mysql && echo $?
3. chgrp -R mysql /var/mysql && chmod -R 770 /var/mysql && echo $?
4. installf SUNWmysqlr /var/mysql d 770 root mysql
5. cp /usr/sfw/share/mysql/my-medium.cnf /etc/my.cnf (global configuration)
6. /usr/sfw/sbin/mysqld_safe --user=mysql& - starts MySQL
7. symlink
ln /etc/sfw/mysql/mysql.server /etc/rc3.d/S99mysql
ln /etc/sfw/mysql/mysql.server /etc/rc0.d/K00mysql
ln /etc/sfw/mysql/mysql.server /etc/rc1.d/K00mysql
ln /etc/sfw/mysql/mysql.server /etc/rc2.d/K00mysql
ln /etc/sfw/mysql/mysql.server /etc/rcS.d/K00mysql

Note: MyISAM Tables usually contain at least 3 files:
1. .MYI - Index file
2. .MYD - Data File
3. .FRM - Form file(Describes Table Structure)

Note: Client options specified on the command line override all other instances of the option.
Order of options/directives to be processed usually resembles the following:
1. /etc/my.cnf - global config file
2. /var/mysql/my.cnf - data-server specific config file
3. ~/my.cnf - user-specific config file
4. command line options

Note: Drop test database using the following syntax: 'drop database test;'
Note: You CANNOT drop the 'mysql' database because it contains the following critical information:
1. list of databases to manage
2. user table
3. privileges table

Note: MySQL creates 2 default users: 'root & anonymous'
Note: The anonymous user matches all unmatched users

Create MySQL User using the following command:
grant all privileges on *.* to 'unixcbt'@'localhost' IDENTIFIED BY 'abc123';

Note: After altering privileges, flush them to take effect using:
flush privileges;
