Latest home storage upgrade, 400GB


One of the RAID1 arrays in my IDE bandelero was failing because of a bad hard disk. Worse, the kernel panics had corrupted the reiserfs filesystem, which caused further kernel panics. It took me a long time pussyfooting around the filesystem to get a good backup. I basically ran rsync under strace and waited for it to kernel panic while reading the reiserfs filesystem. Then I gathered any lstat64 failures, rebooted, and excluded those files from the backup. After about 20 panic/reboot/lstat64 cycles, I had a full backup. I'm switching back to ext3. Reiser did fine until the hardware failed, and I was on reiser 3.6; I'm sure it's more stable now, but once burned, twice shy.
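The harvesting step can be sketched roughly like this. Everything here is illustrative (file names, trace path, the canned trace itself); the real loop involved a reboot between every pass:

```shell
#!/bin/sh
# Sketch of the backup loop: run rsync under strace, then pull out the
# paths whose lstat64 failed so the next pass can exclude them.
# All file names below are made up for illustration.

# Print paths from an strace log whose lstat64 call returned -1.
extract_failures() {
    grep 'lstat64(' "$1" | grep ' = -1 ' \
        | sed 's/^.*lstat64("\([^"]*\)".*$/\1/'
}

# The real trace came from something like:
#   strace -f -e trace=lstat64 -o /tmp/rsync.trace \
#       rsync -a --exclude-from=/tmp/excludes /mnt/family/ /backup/
# Demo against a canned trace; each run's output gets appended to the
# exclude list before the next attempt.
cat > /tmp/rsync.trace <<'EOF'
lstat64("/mnt/family/ok.txt", {st_mode=S_IFREG|0644, ...}) = 0
lstat64("/mnt/family/bad.mp3", 0xbfffe5f0) = -1 EIO (Input/output error)
EOF

extract_failures /tmp/rsync.trace
# -> /mnt/family/bad.mp3
```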

Thankfully no critical files were caught in the corruption. My old writeups call this an mp3 share, but it has grown more important than that in my adult life: it holds tax returns, QuickBooks files, laptop backups, appliance serial numbers, and a lot of other important data. I even rotate an offsite backup a few times a year.

After all that was done, I could move on to building a new array. Best Buy had 200GB disks for $79 each. That's 40 cents per GB, or 80 cents per RAID1 GB. That's a nice change from last time, when $480 bought 240GB: $2 per GB, or $4 per RAID1 GB. I'm still using the same Promise-chip (PDC20269) Maxtor PCI card. What follows are the details of building a new 400GB logical volume from two RAID1 arrays, with some helpful commands tacked on the end.

styx:/mnt# mdadm --create /dev/md1 -l 1 -n 2 /dev/hde1 /dev/hdg1
mdadm: array /dev/md1 started.
styx:/mnt# mdadm --create /dev/md2 -l 1 -n 2 /dev/hdf1 /dev/hdh1
mdadm: array /dev/md2 started.
styx:/mnt# pvcreate /dev/md1
pvcreate -- physical volume "/dev/md1" successfully created

styx:/mnt# pvcreate /dev/md2
pvcreate -- physical volume "/dev/md2" successfully created

styx:/mnt# vgchange -an vg0
vgchange -- couldn't  open volume group "vg0" group special file
vgchange -- try vgmknodes

styx:/mnt# vgremove vg0
vgremove -- volume group "vg0" doesn't exist

styx:/mnt# vgcreate -s 16m vg0 /dev/md1 /dev/md2
vgcreate -- INFO: maximum logical volume size is 1023.97 Gigabyte
vgcreate -- doing automatic backup of volume group "vg0"
vgcreate -- volume group "vg0" successfully created and activated

styx:/mnt# vgdisplay
--- Volume group ---
VG Name               vg0
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           1023.97 GB
Max PV                256
Cur PV                2
Act PV                2
VG Size               372.56 GB
PE Size               16 MB
Total PE              23844
Alloc PE / Size       0 / 0
Free  PE / Size       23844 / 372.56 GB
VG UUID               2wDBCV-NZFm-Lrm7-pvQQ-qxhW-c4wV-x06fMX


styx:/mnt# lvcreate -l 23844 vg0
lvcreate -- doing automatic backup of "vg0"
lvcreate -- logical volume "/dev/vg0/lvol1" successfully created

styx:/mnt# lvrename vg0 lvol1 lv0
lvrename -- doing automatic backup of volume group "vg0"
lvrename -- logical volume "/dev/vg0/lvol1" successfully renamed to "/dev/vg0/lv0"

styx:/mnt# mkfs.ext3 -jv -O dir_index,sparse_super /dev/vg0/lv0
mke2fs 1.37 (21-Mar-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
48840704 inodes, 97665024 blocks
4883251 blocks (5.00%) reserved for the super user
First data block=0
2981 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
styx:/mnt# mount /mnt/family
styx:/mnt# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              8.6G  5.7G  3.0G  66% /
tmpfs                 379M     0  379M   0% /dev/shm
/dev/vg0/lv0          367G   33M  349G   1% /mnt/family

Here are the same steps again in recipe form, including a few commands that weren't readily apparent.
# add the RAID arrays as LVM physical volumes
pvcreate /dev/md1
pvcreate /dev/md2
# take old offline if necessary
vgchange -an vg0
vgremove vg0
# new one w/ 16MB extents -- twice the max LV size we need (LVM1 max LV size scales with extent size)
vgcreate -s 16m vg0 /dev/md1 /dev/md2
# get "Total PE" for next command
vgdisplay
# create logical volume
lvcreate -l 23844 vg0
# rename to match my fstab
lvrename vg0 lvol1 lv0
# make a filesystem
mkfs.ext3 -jv -O dir_index,sparse_super /dev/vg0/lv0
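The rename only matters because the mount is driven from fstab; the matching entry presumably looks something like this (the options are a guess at sensible defaults, not copied from my actual file):

```
/dev/vg0/lv0   /mnt/family   ext3   defaults   0   2
```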
# stop an array
mdadm --stop /dev/md1
# erase a RAID component's memories of its former life
mdadm --zero-superblock /dev/hde1
# make a volume group inactive so it can be removed
vgchange -an vg0
# remove a volume group
vgremove vg0
# remove a physical volume
pvremove /dev/md1
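A few follow-up commands round this out. These are standard mdadm/LVM/e2fsprogs invocations (the exact numbers passed to tune2fs are just my preference); the tune2fs line addresses the every-24-mounts fsck schedule that mke2fs warned about:

```shell
# watch the mirrors resync
cat /proc/mdstat
# full detail on one array
mdadm --detail /dev/md1
# confirm the physical and logical volumes
pvdisplay /dev/md1
lvdisplay /dev/vg0/lv0
# stretch the periodic fsck schedule mke2fs warned about
tune2fs -c 50 -i 365d /dev/vg0/lv0
```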