The Logical Volume Manager (henceforth LVM) consists of three components: physical volumes (PVs), volume groups (VGs) and logical volumes (LVs). One or more PVs make up a VG, and one VG can be split into one or more LVs.
To use LVM we have to follow this procedure:
– create one or more PVs
– create a VG made up of one or more PVs
– create one or more LVs from the VG
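In its most basic form, with a single spare disk and made-up names (/dev/sdx, vgdemo and lvdemo are hypothetical), the whole sequence boils down to something like this:
# pvcreate /dev/sdx → step 1: turn the disk into a PV
# vgcreate vgdemo /dev/sdx → step 2: build a VG on top of it
# lvcreate -L 500m -n lvdemo vgdemo → step 3: carve an LV out of the VG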
PVs can be created on whole disks, disk partitions, meta-devices or loopback files. Usually it is best to use whole disks, but in cases where we have a few very large disks we might want to partition them and create PVs on those partitions for extra flexibility.
Let’s assume we have 3 disks (/dev/sd[bcd]) and we want to create a PV on each of them. If the disks have a partition table (they might have been used for something else before) we might want to wipe it out with the dd command (it is faster than doing it with fdisk):
# dd if=/dev/zero of=/dev/sdb bs=512 count=1 → wipes out 1st sector where partition table resides
# dd if=/dev/zero of=/dev/sdc bs=512 count=1 → same for sdc
# dd if=/dev/zero of=/dev/sdd bs=512 count=1 → same for sdd
# pvcreate /dev/sdb /dev/sdc /dev/sdd → create a physical volume on each of sdb, sdc & sdd
After creating the PVs we can use the lvmdiskscan command to check that the 3 devices are listed as LVM physical volumes. We can also list the details of those PVs with the pvdisplay command:
# pvdisplay /dev/sdb
“/dev/sdb” is a new physical volume of “1.00 GiB”
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 1.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 5LhiFW-Kg2v-238c-KhTi-ds0G-VbO5-ExTFKH
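The lvmdiskscan output is not reproduced above, but it simply lists every block device LVM can see and tags those that are already PVs; for our 3 disks the relevant lines should look roughly like this (illustrative):
# lvmdiskscan
/dev/sdb [ 1.00 GiB] LVM physical volume
/dev/sdc [ 1.00 GiB] LVM physical volume
/dev/sdd [ 1.00 GiB] LVM physical volume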
We can also use the pvscan command to confirm yet again that the 3 devices are PVs:
# pvscan
PV /dev/sda3 VG vg lvm2 [9.51 GiB / 0 free]
PV /dev/sdc lvm2 [1.00 GiB]
PV /dev/sdd lvm2 [1.00 GiB]
PV /dev/sdb lvm2 [1.00 GiB]
Total: 4 [12.51 GiB] / in use: 1 [9.51 GiB] / in no VG: 3 [3.00 GiB]
Now we can proceed with the creation of the VG that we shall call “vg0”:
# vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd
Volume group “vg0” successfully created
As we did with the PVs, we can check the new VG is there with the commands vgscan and vgdisplay:
# vgscan
Reading all physical volumes. This may take a while…
Found volume group “vg0” using metadata type lvm2
Found volume group “vg” using metadata type lvm2
# vgdisplay vg0 -v
Using volume group(s) on command line
Finding volume group “vg0”
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 2.99 GiB
PE Size 4.00 MiB
Total PE 765
Alloc PE / Size 0 / 0
Free PE / Size 765 / 2.99 GiB
VG UUID 93AzK2-VqE7-xgFf-JDCM-seYX-KcyZ-8qlmI9
--- Physical volumes ---
PV Name /dev/sdb
PV UUID 5LhiFW-Kg2v-238c-KhTi-ds0G-VbO5-ExTFKH
PV Status allocatable
Total PE / Free PE 255 / 255
PV Name /dev/sdc
PV UUID VL1iHu-FimI-gGsF-u7Ub-Txdy-q4Hm-dKenpC
PV Status allocatable
Total PE / Free PE 255 / 255
PV Name /dev/sdd
PV UUID 41Jirk-7B2O-gx5t-tre4-NpPU-10Cg-0EiC1T
PV Status allocatable
Total PE / Free PE 255 / 255
If we run out of space and need to add storage to the VG, we can do so by creating a new PV with the pvcreate command and then running vgextend:
# pvcreate /dev/sde
# vgextend vg0 /dev/sde
What if a disk becomes faulty and we need to move all the data out before removing it? We have to use pvchange, pvmove, vgreduce and pvremove:
# pvchange -xn /dev/sdd → mark the PV as not available for further allocation of extents
# pvmove /dev/sdd → moves all the data in the PV onto other PVs within the same VG
# pvmove /dev/sdd /dev/sdb → …or moves its data onto sdb only…
# pvmove /dev/sdd:0-999 /dev/sdb → …or moves extents 0-999 onto sdb…
# pvmove /dev/sdd:1000-1999 /dev/sdc → …and extents 1000-1999 onto sdc…
# pvmove -n lvol3 /dev/sdd /dev/sde → we can also move just data belonging to lvol3 onto sde
Once the PV has been emptied, we can remove it from the VG and eliminate it as a PV:
# vgreduce vg0 /dev/sdd
# pvremove /dev/sdd
Now we can use the vgdisplay and pvscan commands to make sure everything is as it should be.
We can resize a PV with the pvresize command. If we want to increase its size and the PV sits on a partition, we should first use parted to make the partition bigger. If the PV uses a whole disk (physical or virtual), the disk itself should be made larger before running pvresize.
# pvresize /dev/sdb → resizes the PV to the new size of the underlying disk or partition
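For a partition-backed PV the growing sequence would be along these lines (the disk /dev/sdf and its partition are hypothetical, and the parted syntax may vary slightly between versions):
# parted /dev/sdf resizepart 1 100% → grow partition 1 to use all the newly added disk space
# pvresize /dev/sdf1 → then grow the PV to match the new partition size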
If we need to make it smaller, we should shrink the PV to the smaller size first and only then run fdisk to reduce the real size of the partition.
# pvresize --setphysicalvolumesize 1024m /dev/sdb → explicitly sets the PV size down to 1024m
If there are allocated extents in the part we intend to chop, pvresize will obviously return an error.
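A quick way of checking how much of a PV is actually allocated before attempting to shrink it is the pvs command:
# pvs /dev/sdb → the PSize and PFree columns show the total and unallocated space on the PV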
We have the vgchange command that allows us to change some properties of a VG. The most common changes we might need to perform on a VG are:
# vgchange -a y → activates all VGs in the system
# vgchange -a n vg3 → deactivates VG vg3
# vgchange -a ay → auto-activates all VGs in the system at boot time (unless lvm.conf contains an explicit list of VGs that should be auto-activated)
# vgchange --resizable y vg0 → enables the resizing of vg0
If we happened to use LVM1 VGs, we can convert them to LVM2 with the vgconvert command:
# vgconvert -M2 vg11 → converts the LVM1 metadata of VG vg11 to LVM2 format
The vgreduce command lets us remove all empty PVs from a VG in one go:
# vgreduce -a vg12 → drops all empty PVs from VG vg12
It also allows us to drop any missing PV from a VG:
# vgreduce --removemissing vg13 → drops all missing PVs from VG vg13
# vgreduce --removemissing --force vg13 → will drop missing PVs forcefully
Renaming a VG is trivial with vgrename:
# vgrename vg13 vgbackup → renames vg13 as vgbackup
We can split a VG with vgsplit by naming the spin-off and specifying the PVs that will form it:
# vgsplit vg12 vg14 /dev/sdh /dev/sdi → creates the new VG vg14 by assigning it the PVs sdh & sdi, formerly part of vg12
We can do the opposite and merge two VGs with vgmerge:
# vgmerge vg15 vg16 → the contents of vg16 are merged with vg15, and vg16 disappears
We can drop a VG with the vgremove command. If there are any active LVs on it we will get a prompt asking us to confirm the operation, unless we use the “-f” flag.
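For instance, reusing vg14 from the vgsplit example above:
# vgremove vg14 → drops the VG, prompting for confirmation if it still holds LVs
# vgremove -f vg14 → same, but without asking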
The metadata of all VGs in the system is automatically backed up to its default location in /etc/lvm/backup (one file per VG). That location can be changed by customising /etc/lvm/lvm.conf. We can also take a manual backup at any time with the vgcfgbackup command.
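For example (the alternative file name below is just an illustration):
# vgcfgbackup vg0 → backs up the metadata of vg0 under /etc/lvm/backup
# vgcfgbackup -f /root/vg0.meta vg0 → writes the backup to a file of our choosing instead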
If we need to move a VG to another host, we can do it with the vgexport and vgimport commands.
# vgchange -a n vg0 → marks vg0 as inactive
# vgexport vg0 → readies the VG for moving and makes it inaccessible
Then we move the underlying devices to the new host, and on the new host …
# pvscan → should show the devices of the VG if they can be detected; if so …
# vgimport vg0 → imports the VG and voilà, we have vg0 on the new host, but …
# vgchange -a y vg0 → we still need to activate it before we can access it
Creating LVs is easy peasy with lvcreate:
# lvcreate -L 1g -n lv1 vg0 → create the LV lv1 in the VG vg0 with a size of 1G
# lvcreate -l 20%VG -n lv2 vg0 → create lv2 in vg0 using 20% of space in the VG
# lvcreate -l 40%FREE -n lv3 vg0 → create lv3 in vg0 using 40% of the free space left in the VG
# lvcreate -L 300m -n lv4 vg0 /dev/sdb → create lv4 in vg0 using 300M only in PV /dev/sdb
# lvcreate -L 300m -T -n lv4 vg0 /dev/sdb → same as above but as a 300M thin pool (thin provisioning)
# lvcreate -L 400M -n lv5 vg0 /dev/sdc:0-100 → create lv5 in vg0 using 400M in PV /dev/sdc, on extents 0 to 100 (assuming 4M extent size!)
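Once created, an LV shows up as a block device under /dev/<vg>/<lv> (and /dev/mapper/<vg>-<lv>), so we can put a filesystem on it and mount it like any other device; a quick sketch with a hypothetical mount point:
# mkfs.ext4 /dev/vg0/lv1 → create an ext4 filesystem on the new LV
# mount /dev/vg0/lv1 /mnt/data → mount it (assuming /mnt/data exists)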
The LVs created above are linear ones (no mirroring or striping). If we have a VG with several physical disks (and different controllers?) we might want to use striped LVs to improve performance. To do that we use the “-i” (--stripes) and “-I” (--stripesize) flags. The former states the number of stripes or PVs on which to stripe the LV. The latter states the stripe size. For example:
# lvcreate -L 1g -i 4 -I 1024 -n lv6 vg0 → creates lv6 on vg0 striping data over 4 PVs with a stripe size of 1M
The LV lv6 should perform much better than the other LVs created so far, but it is no more resilient to the loss of a PV. To build in the capability to sustain the loss of one or more PVs we need to use some kind of LVM RAID. There are 5 RAID types available:
– RAID1: straight mirroring without striping
– RAID4: striping with a dedicated parity PV
– RAID5: striping with parity scattered across all PVs
– RAID6: same as RAID5 but with 2 parity PVs
– RAID10: striping of mirrors
Let’s create a 1GB LV of type RAID1 with just one mirror, called lv7, on the VG vg0:
# lvcreate --type raid1 -m 1 -L 1g -n lv7 vg0
And another 1GB LV of type RAID5 with 3 stripes (3 stripes + 1 parity = 4 PVs) and a 1MB stripe size:
# lvcreate --type raid5 -i 3 -I 1024 -L 1g -n lv8 vg0
We can do the same with RAID6, bearing in mind that it needs 2 parity PVs: for instance, 3 stripes + 2 parities = 5 PVs required.
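A hedged example (the LV name lv9 is made up):
# lvcreate --type raid6 -i 3 -I 1024 -L 1g -n lv9 vg0 → 3 data stripes + 2 parity PVs = 5 PVs required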
LVM also allows us to migrate from one layout to another. Let’s say we have the LV lvsoftware in the VG vgdata set up as an old-style mirror and we want to change it to RAID1. We can do so like this:
# lvconvert --type raid1 -m 1 vgdata/lvsoftware
Depending on the size of the LV, the transformation might take more or less time. We can track progress with the lvs command:
# lvs -a -o name,copy_percent,devices vgdata/lvsoftware
LV Copy% Devices
lvsoftware 38.34 lvsoftware_rimage_0(0),lvsoftware_rimage_1(0)
We can change from any RAID type, linear or striped layout to anything else, as long as we have the space and the number of PVs required to perform the change. For example, say we have the LV lvsoftware set up as RAID1 and using the PVs /dev/sda4 & /dev/sdb4. We can transform it into a linear volume that uses just /dev/sdb4, removing /dev/sda4 from the LV, by running:
# lvconvert -m0 vgdata/lvsoftware /dev/sda4
If we have a RAID1 with one or more mirrors in sync, we can split a set of mirrors as a new LV (e.g. for testing). For instance, the command …
# lvconvert --splitmirrors 1 -n lv10-test vg0/lv10 /dev/sdj /dev/sdk /dev/sdi
… would split one mirror from lv10 and name it lv10-test. We have explicitly specified the list of PVs we wanted split from lv10, but that’s optional and most of the time it should not be necessary. So we could have run this instead …
# lvconvert --splitmirrors 1 -n lv10-test vg0/lv10
Splitting the mirror off into a new read-write LV is an irreversible operation. If we wanted to add the mirror back, we would have to come up with the empty PVs and add the mirror again. On the other hand, if what we want is a read-only copy of the mirror that will later be resynced, then we can accomplish that with the following command:
# lvconvert --splitmirrors 1 --trackchanges vg0/lv10 → splits the mirror to use as a backup or testing volume
lv10_rimage_2 split from lv10 for readonly purposes.
# lvconvert --merge vg0/lv10_rimage_2 → merges or resyncs the split mirror
We’ll see the name of the read-only LV in the output of the command (e.g. lv10_rimage_2), as with --trackchanges that LV cannot be given a name of our choosing. When we want to drop the read-only LV and re-merge the mirror, we execute the lvconvert --merge command shown above.
Extending a linear LV is pretty straight-forward:
# lvextend -L 1g -r vg0/lv13 → resets the size of LV lv13 to 1G
# lvextend -L +2g -r vg0/lv13 → grows the current size by 2G
# lvextend -l +30%FREE -r vg0/lv13 → grows the current size by 30% of the free space left in vg0
# lvextend -l 40%VG -r vg0/lv13 → resets the size so that it is 40% of the total space in vg0
# lvextend -l +250 -r vg0/lv13 → grows the current size by 250 extents (4M x 250 = 1G)
Pay attention to the “-r” flag, which resizes the underlying file system at the same time, making it unnecessary to run resize2fs separately after completing the resize.
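If we skip the flag, or simply prefer the two-step approach, we can grow the filesystem separately afterwards; a quick sketch assuming lv13 holds an ext4 filesystem:
# lvextend -L +2g vg0/lv13 → grow the LV only
# resize2fs /dev/vg0/lv13 → then grow the ext4 filesystem to fill the extra space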
Extending RAIDed LVs might involve adding PVs before executing the lvextend command. For example, if we have a 2-way RAID1 and at least one of the devices does not have enough space to accommodate our growth targets, then we would have to add 2 new devices to the VG. Or if we have a RAID5 (4 data PVs + 1 parity) and we have run out of space in all 5 devices, then we would need to add another 5 PVs to the VG.
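As an illustration (the extra devices are made up), growing the 2-way RAID1 lv7 created earlier might look like this:
# vgextend vg0 /dev/sdl /dev/sdm → first add two fresh PVs to the VG
# lvextend -L +1g -r vg0/lv7 → then grow the RAID1 LV as usual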
Another very useful feature of LVM is the possibility of taking snapshots. The following command takes a snapshot of the /dev/vgdata/oracle logical volume, calls it /dev/vgdata/orasnap and allocates 100MB to it.
# lvcreate -L 100m -s -n orasnap vgdata/oracle
When using snapshots we should monitor space utilisation (with lvdisplay / lvs / etc.) because as soon as it hits 100% the snapshot becomes invalid and useless (i.e. out of sync). We can take snapshots to replicate volumes for testing or to take backups. The snapshot can be mounted like any other filesystem, but if the underlying filesystem is XFS we will have to use the flag “-o nouuid”, as it would otherwise fail to mount due to a duplicate UUID.
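A couple of hedged examples (the mount point is made up and the data_percent field assumes a reasonably recent LVM release):
# lvs -o name,lv_size,data_percent vgdata/orasnap → the Data% column shows how full the snapshot is
# mount -o nouuid,ro /dev/vgdata/orasnap /mnt/snap → mounting an XFS-based snapshot read-only
When we are done with a snapshot and want it removed…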
# lvremove vgdata/orasnap
Logical Volume Manager has quite a few other advanced options…
– repairing RAIDs
– replacing devices
– scrubbing volumes
– controlling RAID I/O and failure policies
– creating and managing thin-provisioned volumes
– customising LVM settings in /etc/lvm/* files
… but rather than covering them here, I would suggest that system administrators dealing with those more complex topics check Steven Levine’s manual at: