[Cialug] mirrored raid

Zachary Kotlarek zach at kotlarek.com
Sun Apr 10 22:38:51 CDT 2016


On 9 Apr 2016, at 18:22, Dan Hockey wrote:

> I totally screwed things up and need to start over. Any suggestions on how
> to start over? Should I use the old drives first or the new drives? Clean
> re-install of omv? its getting late.screw it.


I’m not sure what you mean by starting over, so I don’t have particular advice on that. If you’ve got a stable copy of the data someplace other than on the new disks, and aren’t worried about keeping the array continuously online, I’d suggest you just make a new array on the new disks using whatever OMV typically suggests and restore to it from the old disk(s) or other backup after it’s set up.

More generally, here are commands that provide useful information for debugging what’s happening below the filesystem level:

cat /proc/mdstat
	displays the status of all active md arrays

mdadm --examine /dev/sd<foo>
	displays the md header info from a physical disk

mdadm --detail /dev/md<foo>
	displays detailed md array information for the logical array

lvdisplay
	displays information about LVM logical volumes. Optionally takes an LVM volume device file path or VG/LV name if you only want details about a specific volume.

vgdisplay
	displays information about LVM volume groups. Optionally takes a volume group name if you only want details about a specific volume group (though you probably only have one).

pvdisplay
	displays information about an LVM physical volume. Optionally takes a device path if you only want details about a specific device.
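If you end up running these repeatedly while debugging, they can be wrapped in a small script that collects everything into one report file. A sketch (the array name /dev/md0 is a placeholder; commands that aren’t available on your system are noted rather than aborting):

```shell
#!/bin/sh
# Collect md/LVM diagnostics into a single report file.
# /dev/md0 is an example array name; adjust to your system.
report=raid-report.txt
{
  echo "== /proc/mdstat =="
  cat /proc/mdstat 2>/dev/null || echo "(not available)"
  echo "== mdadm --detail /dev/md0 =="
  mdadm --detail /dev/md0 2>/dev/null || echo "(not available)"
  echo "== LVM logical volumes =="
  lvdisplay 2>/dev/null || echo "(not available)"
  echo "== LVM volume groups =="
  vgdisplay 2>/dev/null || echo "(not available)"
  echo "== LVM physical volumes =="
  pvdisplay 2>/dev/null || echo "(not available)"
} > "$report"
echo "wrote $report"
```

Having all of this in one file also makes it easy to paste into a mailing-list message when asking for help.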

—

When growing a logical disk you need to expand each of the layers starting at the bottom and working your way up. Here are instructions for common block device abstraction layers.

The ordering here reflects that way I use these tools; you may only use some of these layers or may order them differently and should follow your own bottom-to-top order.

If you use a user interface abstraction many of these steps may be combined, but it’s still useful to know what it’s doing underneath if something goes wrong. md, LVM, dm-crypt and other device-mapper based tools can all be resized live (and often *must* be live). Most modern filesystems can as well, though details vary among filesystems.

1. Device partition table. If you partitioned the device and added the partition to an md array or an LVM volume group, start by expanding the partition using your usual partitioning tool. If you are using the whole device for the higher layers this step is not required.
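With GNU parted this step might look like the following. It’s written as a sketch that saves the commands to a file for review rather than running them; /dev/sdb and partition 2 are placeholders, so check yours with `lsblk` first:

```shell
#!/bin/sh
# Sketch for step 1: save the partition-grow commands for review.
# /dev/sdb and partition number 2 are placeholders.
DEV=/dev/sdb
PART=2
cat > grow-partition.sh <<EOF
parted $DEV resizepart $PART 100%
partprobe $DEV
EOF
cat grow-partition.sh
```

The `partprobe` at the end asks the kernel to re-read the partition table so the higher layers see the new size without a reboot.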

2. Grow the md array. If you add new, bigger disks to an existing md array it does not automatically resize the array — it just leaves extra space unused after syncing. The “examine” verb in mdadm will show you both the “Avail Dev Size” and “Used Dev Size” so you can see if that’s happening. You can make those the same with something like:
	mdadm --grow /dev/md<foo> --size=max
You can provide a number of kilobytes instead of “max” if you do not want to consume the entire underlying block device. If you have the write-intent bitmap enabled (i.e. if /proc/mdstat contains lines like “bitmap: 0/30 pages [0KB], 65536KB chunk”) you will first need to disable that with:
	mdadm --grow /dev/md<foo> -b none
and then after the resize is done re-enable the bitmap with:
	mdadm --grow /dev/md<foo> -b internal
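Putting those three commands together, here’s a sketch that writes the full bitmap-disable / grow / re-enable sequence to a file so you can review it before running it as root (/dev/md0 is a placeholder array name):

```shell
#!/bin/sh
# Sketch for step 2: save the md grow sequence for review.
# /dev/md0 is a placeholder; check /proc/mdstat for your array name.
MD=/dev/md0
cat > grow-md.sh <<EOF
mdadm --grow $MD -b none
mdadm --grow $MD --size=max
mdadm --grow $MD -b internal
EOF
cat grow-md.sh
```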

3. Grow the dm-crypt volume. If you change the block device under a dm-crypt volume you can resize it with:
	cryptsetup resize cryptdisk
where “cryptdisk” is the dm-crypt name for an open dm-crypt device (often the last part of the name from device files in /dev/mapper/*). You can optionally supply a size parameter if you don’t want to use the entire underlying block device. Be careful if you specify a size because you can shrink the volume.
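One way to confirm the resize actually took effect is to compare the mapping’s size before and after with `blockdev`. A sketch, again saved for review rather than run directly (“cryptdisk” is a placeholder mapping name):

```shell
#!/bin/sh
# Sketch for step 3: resize the dm-crypt mapping and verify its size.
# "cryptdisk" is a placeholder name from /dev/mapper/.
NAME=cryptdisk
cat > grow-crypt.sh <<EOF
blockdev --getsize64 /dev/mapper/$NAME
cryptsetup resize $NAME
blockdev --getsize64 /dev/mapper/$NAME
EOF
cat grow-crypt.sh
```

The second `blockdev --getsize64` should report a larger number than the first.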

4. Grow the LVM volume group. I believe LVM volume groups will resize automatically when the underlying disk changes in the kernel (which may require a reboot or `partprobe` if using a partition) — I personally never remember doing this, but I might just be forgetting. In any case you can force the issue with:
	pvresize /dev/sd<foo>
which will re-scan that physical device and update the related volume group. LVM can only use whole block devices so if you don’t want to use the whole disk you must partition at a lower layer.
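A sketch of this step, with a follow-up check that the volume group actually gained free space (/dev/sdb is a placeholder PV device; saved for review rather than run directly):

```shell
#!/bin/sh
# Sketch for step 4: refresh the PV, then confirm the VG saw the space.
# /dev/sdb is a placeholder; `pvdisplay` lists your actual PVs.
PV=/dev/sdb
cat > grow-pv.sh <<EOF
pvresize $PV
vgdisplay
EOF
cat grow-pv.sh
```

After running it for real, the “Free PE / Size” line in the `vgdisplay` output should have gone up.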

5. Grow the LVM logical volume. This can be done anytime there is available space in the volume group, which you can see with `vgdisplay`:
	lvextend -L +4T lvmgroup/lvmvolume
where “+4T” is a relative size adjustment for the logical volume “lvmvolume” in the volume group “lvmgroup”. You can also provide a device file path if you don’t know the group and volume names. You can specify an absolute size (no plus sign) and a number of other unit prefixes. Be careful if you specify an absolute size as it can shrink the volume.

6. Grow the filesystem. The details here vary depending on your filesystem type, but you need to tell the filesystem about the new space. For xfs, run:
	xfs_growfs /live/mount/point
or for ext4, which takes the block device rather than the mount point:
	resize2fs /dev/<foo>
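Tying the whole bottom-to-top order together, here’s a sketch of the full sequence for the stack described above (partition, md, dm-crypt, LVM, then an xfs filesystem), saved to a file for review. Every device name is a placeholder, `-l +100%FREE` is used to consume all free space in the VG, and you should simply drop the lines for layers you don’t use:

```shell
#!/bin/sh
# Sketch: the whole bottom-to-top grow, saved for review.
# All names below are placeholders for an example stack.
cat > grow-all.sh <<'EOF'
parted /dev/sdb resizepart 2 100%           # 1. partition
mdadm --grow /dev/md0 --size=max            # 2. md array
cryptsetup resize cryptdisk                 # 3. dm-crypt
pvresize /dev/mapper/cryptdisk              # 4. LVM physical volume
lvextend -l +100%FREE lvmgroup/lvmvolume    # 5. LVM logical volume
xfs_growfs /live/mount/point                # 6. filesystem
EOF
cat grow-all.sh
```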

	Zach