Today I had an issue with a legacy fileserver, 15 disks pooled in LVM, that ran
out of metadata space for partition additions.
Since we basically thrive off of partition additions for every customer,
you can see the problem.
Also, no disks can be added to the fileserver currently.
Here's how I checked the current amount of free metadata space:
fileserver:/root# vgs --units k -o vg_fmt,vg_attr,vg_mda_count,vg_mda_free vg01
Fmt  Attr   #VMda VMdaFree
lvm2 wz--n- 10 0K
If your VMdaFree is 0K, you're not going to have a good time.
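Beyond the VG-level summary, it can help to see which PVs actually hold metadata
areas and how much room each one has. pvs can report that per PV (using the same
pv_mda_free field this post uses again further down):

fileserver:/root# pvs --units k -o pv_name,vg_name,pv_mda_count,pv_mda_free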
So, to expand the metadata size to 2MB, which is easily enough for well over
300-400 partitions, here's what I did (a consolidated sketch of the whole
sequence follows the list):
* Shut down all daemons accessing any LVs in the group.
* Back up the metadata. Example:
vgcfgbackup -v -f /root/vg01-vgcfgbackup-2013-06-06 vg01
* umount all logical volumes.
* Deactivate all LVs in the group. Example:
vgdisplay -v vg01 | grep -i "lv name" | awk '{ print $3 }' | xargs lvchange -an
* Deactivate the volume group: vgchange -an vg01
* Run pvdisplay and note each PV's UUID and /dev path.
* Walk through each physical drive and recreate its metadata.
Example (change UUID to the proper one, and sdb1 to your drive):
pvcreate -ff --metadatasize 2048k --metadatacopies 2 --uuid <UUID here> /dev/sdb1
(Newer lvm2 builds require --restorefile or --norestorefile alongside --uuid;
point --restorefile at the backup taken above.)
* Restore the VG configuration: vgcfgrestore -v -f /root/vg01-vgcfgbackup-2013-06-06 vg01
* pvs -o +pv_mda_free, just to check that the metadata actually expanded.
* Make all volume groups active: vgchange -ay
* Make all logical volumes active: lvchange -ay LV-Name
(replacing LV-Name with each logical volume's name)
* mount -a to remount everything in /etc/fstab (mount -a takes no VG argument).
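For reference, here's the whole sequence pulled together as a rough script.
It's a sketch, not a paste-and-run tool: it assumes the vg01 name and backup
path used above, and you still have to repeat the pvcreate line with the real
UUID/device pair for each drive noted from pvdisplay:

#!/bin/sh
# Sketch of the procedure above, assuming vg01 and this post's backup path.
BACKUP=/root/vg01-vgcfgbackup-2013-06-06
vgcfgbackup -v -f "$BACKUP" vg01
# Unmount and deactivate every LV in the group.
vgdisplay -v vg01 | grep -i "lv name" | awk '{ print $3 }' | xargs -n1 umount
vgdisplay -v vg01 | grep -i "lv name" | awk '{ print $3 }' | xargs lvchange -an
vgchange -an vg01
# Repeat this line for each UUID/device pair noted from pvdisplay:
pvcreate -ff --metadatasize 2048k --metadatacopies 2 --restorefile "$BACKUP" --uuid <UUID here> /dev/sdb1
# Restore the VG configuration and verify the bigger metadata areas.
vgcfgrestore -v -f "$BACKUP" vg01
pvs -o +pv_mda_free
# Bring everything back up.
vgchange -ay vg01
mount -a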
After doing that, running the vgs command returns the following:
fileserver:/root# vgs --units k -o vg_fmt,vg_attr,vg_mda_count,vg_mda_free vg01
Fmt  Attr   #VMda VMdaFree
lvm2 wz--n- 15 959.00K
And everything is happy.