
Linux Storage & Filesystems Explained: From Partitions to LVM (Part II)

This article turned out a bit lengthy since it covers almost all the production-grade Linux storage setups. I’ve tried to keep it engaging and practical, so you not only understand the concepts but also see how they’re applied in real-world scenarios.

In the previous article, we explored the conceptual side of Linux storage: partitions, filesystems,
and Logical Volume Management (LVM).
Today, we’ll continue the same subject but focus on the practical aspect.
We’ll walk through:
1. Creating a single partition
2. Creating multiple partitions with an extended partition
3. Setting up LVM
4. Performing operations on LVM

Environment Setup
For this exercise, I’m using an AWS EC2 Instance.
By default, AWS AMIs come with traditional partitioning, but I’ve attached five extra disks during
instance creation with sizes:

• 4 GiB
• 5 GiB
• 6 GiB
• 7 GiB
• 8 GiB

Step 1: Check Available Disks

First, connect to your EC2 instance via SSH.


Then run the following command to list all available block devices:
# lsblk

From the output, you can see that we have six block devices attached to the system:
1. /dev/xvda (the main disk)
2. /dev/xvdb (4G)
3. /dev/xvdc (5G)
4. /dev/xvdd (6G)
5. /dev/xvde (7G)
6. /dev/xvdf (8G)
The main disk /dev/xvda already has multiple partitions and is mounted:

• xvda1 → mounted on /

• xvda15 → mounted on /boot/efi


• xvda16 → mounted on /boot
The other disks (xvdb, xvdc, xvdd, xvde, xvdf) are empty — they don’t yet have filesystems
and are not mounted to any directory.
These will be the disks we’ll use for our partitioning and LVM exercises.

Step 2: Create Two Partitions on /dev/xvdf (5 GiB + 3 GiB)


We’ll use fdisk (MBR) to split /dev/xvdf into two primary partitions:

• /dev/xvdf1 → 5 GiB
• /dev/xvdf2 → 3 GiB
Tools:

• fdisk → MBR/DOS partition tables

• gdisk → GPT partition tables


For this demo we’ll proceed with fdisk; feel free to explore gdisk on your own.

2.1 Launch fdisk


sudo fdisk /dev/xvdf

2.2 Create the first partition (5 GiB)


Inside fdisk, type the following keys:

• n → new partition

• p → primary

• 1 → partition number 1

• First sector → press Enter to accept default


• Last sector/size → type +5G

• p → print the table (to confirm)

2.3 Create the second partition (3 GiB)


• n → new partition

• p → primary

• 2 → partition number 2

• First sector → press Enter


• Last sector/size → type +3G (or press Enter to use the remaining space if it equals ~3 GiB)

• p → print again to verify both partitions

• w → write the partition table to disk and exit
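If you prefer to script these keystrokes instead of typing them interactively, the same sequence can be piped into fdisk. This is a sketch only; it writes the table immediately, so test it on a scratch disk first:

# printf 'n\np\n1\n\n+5G\nn\np\n2\n\n+3G\np\nw\n' | sudo fdisk /dev/xvdf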


Step 3: Verify the Partitions
From the output above, you can see that we’ve successfully partitioned /dev/xvdf into two
primary partitions:
• /dev/xvdf1 → 5 GiB

• /dev/xvdf2 → 3 GiB
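You can also confirm the new layout outside of fdisk:

# lsblk /dev/xvdf
# fdisk -l /dev/xvdf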

At this point, the disk has been split into two usable partitions. Next, we’ll need to format them
with filesystems before mounting, but before moving on, it’s worth taking a quick look at the useful
flags inside fdisk.

Useful fdisk Commands to Explore


When you’re inside the fdisk prompt, the following single-letter flags are very handy:

• p → Print the current partition table

• n → Create a new partition

• d → Delete a partition

• t → Change a partition type (e.g., Linux, swap, LVM, etc.)

• l → List all known partition types


• v → Verify the partition table

• F → List free (unallocated) space

• i → Show detailed information about a specific partition


These are worth experimenting with to get comfortable managing disks.

Step 4: Create Primary and Extended Partitions on a 50 GiB Disk
Now let’s attach an additional 50 GiB volume (/dev/xvdg) to our EC2 instance.
We’ll use it to create 3 primary partitions and 1 extended partition.
Inside the extended partition, we’ll carve out logical partitions.

Partition Plan
• Primary Partitions
1. /dev/xvdg1 → 5 GiB
2. /dev/xvdg2 → 6 GiB
3. /dev/xvdg3 → 7 GiB

• Extended Partition (≈32 GiB)


1. /dev/xvdg5 → 6 GiB (logical partition 1)
2. /dev/xvdg6 → 7 GiB (logical partition 2)
3. /dev/xvdg7 → 8 GiB (logical partition 3)
4. /dev/xvdg8 → remaining space (~11 GiB, logical partition 4)
Step 5: Understanding Primary vs Extended Partitions
We have now partitioned the block device /dev/xvdg into four partitions:

• 3 primary partitions
• 1 extended partition (which holds multiple logical partitions)
Since we are using traditional MBR partitioning with the fdisk tool, there are some limitations:

• An MBR disk can have a maximum of 4 primary partitions.


• To overcome this, one of those can be designated as an extended partition.
• Inside the extended partition, you can create multiple logical partitions.
• This is exactly what we have done with /dev/xvdg.
In contrast, if you use GPT partitioning (gdisk):

• There is no practical restriction on the number of partitions (GPT supports 128 by default, and the table can be enlarged).


• There is also no concept of “primary” or “extended” partitions.
• GPT disks are more modern and flexible, which is why they are generally preferred for large
storage environments.
Step 6: Verify Partition Types and IDs
To check whether a partition is primary, extended, or logical, we use the fdisk -l command:

From the output, you’ll notice that each partition has an ID (Hex code) that indicates its type.
Some common IDs are:
• 82 → Linux Swap
• 83 → Linux Filesystem (default for ext4, xfs, etc.)
• 8e → Linux LVM (used when creating Physical Volumes for LVM)
• 05 → Extended Partition (to hold logical partitions)
These IDs help the system (and us) recognize how a partition is meant to be used.

In our case:

• The 3 primary partitions on /dev/xvdg show up as ID 83 (Linux)

• The extended partition shows up as ID 05


• Any logical partitions inside the extended partition also default to ID 83 unless changed.
Since we have ~32 GiB in the extended partition, let’s create four logical partitions of 6 GiB, 7 GiB, 8 GiB, and ~11 GiB. (Recall that the extended partition itself was created in fdisk by answering e instead of p; logical partitions carved out of it are numbered from 5.)

# fdisk /dev/xvdg
Now, we can easily format all the primary and logical partitions with any filesystem of our
choice and mount them to any directory, making the devices fully usable.
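For example, to put an ext4 filesystem on the first primary partition and mount it (a sketch; the mount point is illustrative):

# mkfs.ext4 /dev/xvdg1
# mkdir -p /mnt/data1
# mount /dev/xvdg1 /mnt/data1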

Step 7: Moving to LVM (Logical Volume Management)


So far, we’ve learned how to:

• Create a single partition that uses the entire disk


• Create 3 primary partitions and 1 extended partition, then add logical partitions inside
the extended partition
Now, let’s move to the more advanced, widely used, and dynamic solution, LVM (Logical
Volume Management).
The biggest advantage of LVM is its flexibility:
• You can easily extend or shrink volumes
• Resize filesystems without downtime
• Move or migrate data between disks
• Create snapshots for backup or testing
If you’d like a refresher on the theory of LVM, check out my earlier article: Linux Storage &
Filesystems Explained: From Partitions to LVM

LVM Workflow Recap


LVM setup involves three simple steps:
1. Create Physical Volumes (PVs) from raw disks or partitions
2. Create a Volume Group (VG) that pools storage from one or more PVs
3. Create Logical Volumes (LVs) from the volume group, then format and mount them
Available Disks for LVM
We still have the following unused block devices available on our system:
• /dev/xvdc

• /dev/xvdd

• /dev/xvde

• /dev/xvdf (we initially created two primary partitions on it, then removed them to make it available for LVM)
We’ll start by creating Physical Volumes (PVs) on two of these disks. Since we already have four
unused block devices, let’s go ahead and create physical volumes on any two of them to get
started.

1. Creating Physical Volumes:


To create physical volumes on our selected disks, we use the following command:
# pvcreate /dev/xvdc /dev/xvdd

Once created, we can verify the physical volumes and check if they are attached to any volume
group using these commands:

# pvdisplay
# pvs
Understanding Key Fields in pvdisplay Output
Let’s break down some important fields you’ll see when running the pvdisplay command:
1. Allocatable
This indicates whether the physical volume (PV) can be allocated to logical volumes.
• If the PV is not yet part of a Volume Group (VG), it cannot be used, and this field
shows NO.

• Once added to a VG, this field usually becomes YES.


2. PE Size
The size of each Physical Extent (PE) in the PV.
3. Total PE
Total number of physical extents in the PV.
4. Free PE
Number of extents available for allocation to logical volumes.
5. Allocated PE
Number of extents currently assigned to logical volumes.

What is a Physical Extent (PE)?


A Physical Volume is divided into the smallest units of storage called Physical Extents (PEs)
when it becomes part of a Volume Group.

• By default, each PE is 4 MB, but this can be configured to 8 MB, 16 MB, up to 16 GB,
depending on your needs.
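The extent size is fixed when a volume group is created. A minimal sketch, assuming you want 8 MiB extents instead of the default:

# vgcreate -s 8M myvg0 /dev/xvdc /dev/xvdd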

Why PE is Important
• Suppose you create a logical volume of 100 MB. LVM will allocate storage in units of PEs.
• With the default 4 MB PE size, LVM will allocate:
100 MB / 4 MB = 25 PEs

• Since our PV is not yet part of a VG, all related fields (Allocatable, PE size,
Total PE, Free PE, Allocated PE) will show 0.

2. Creating Volume Groups:


To create a volume group, we use the vgcreate command:
# vgcreate <name-of-vg> /dev/xvdc /dev/xvdd
# vgcreate myvg0 /dev/xvdc /dev/xvdd

This command combines the selected physical volumes (PVs) into a single volume group (VG),
which can then be used to create logical volumes.
Understanding vgdisplay Output
After creating a VG, you can inspect it using:
# vgdisplay myvg0

Here’s what the important fields mean:


• VG Name: myvg0 – the name you assigned to the volume group.

• System ID: Usually empty unless used in special cluster setups. Can store a system
identifier.
• Format: lvm2 – modern LVM format, almost always used today.

• Metadata Areas: 2 – indicates metadata is stored on two physical volumes (since we added
2 PVs).
• Metadata Sequence No: 1 – increments each time the VG is modified (adding/removing
PVs, creating/extending LVs).
• VG Access: read/write – the VG is modifiable.
• VG Status: resizable – new LVs can be created or extended.
• MAX LV: 0 – represents “unlimited” in LVM2, not literally zero.
• Cur LV: 0 – currently, no logical volumes exist.
• Open LV: 0 – no LVs are opened/mounted.
• Max PV: 0 – unlimited PVs allowed.
• Cur PV: 2 – this VG currently contains 2 physical volumes (/dev/xvdc and
/dev/xvdd).

• Act PV: 2 – both PVs are active and available.


• VG Size: 10.99 GiB – total size of the VG.
• PE Size: 4.00 MiB – size of each physical extent.
• Total PE: 2814 – total number of physical extents (10.99 GiB ÷ 4 MiB).

• Alloc PE / Size: 0 / 0 – no extents are allocated yet because no LVs exist.


• Free PE / Size: 2814 / 10.99 GiB – all space is free for creating logical volumes.
• VG UUID: wGLQzw-DSDd-ks9U-yTEB-VQ2r-lovq-0p6xPi – unique identifier of the VG.

3. Creating Logical Volumes:


Now that we have our volume group (myvg0), let’s create two logical volumes:

• lv0 → 2 GB

• lv1 → 3 GB
Use the following commands:
# lvcreate -n lv0 -L 2G myvg0
# lvcreate -n lv1 -L 3G myvg0

Verifying Changes
After creating the first logical volume (lv0), you can run:
# vgdisplay myvg0

This allows you to observe the changes in the volume group:


• Alloc PE / Size will increase, reflecting the extents used by the new logical volume.
• Free PE / Size will decrease, showing the remaining space available for future LVs.

Repeat the process after creating lv1 to see how the VG adapts as more logical volumes are added.
Exploring Logical Volumes:
Once your logical volumes are created, you can inspect them using the lvs and lvdisplay
commands. For example, running lvdisplay on lv0 shows:

--- Logical volume ---


LV Path /dev/myvg0/lv0
# This is the device file for your logical volume.
# You can format (mkfs.ext4 /dev/myvg0/lv0) and mount it just like a normal disk.
LV Name lv0
VG Name myvg0
LV UUID GNOhbA-9WNd-HDK8-OpTc-c2Tq-cPKe-MgbeAc
LV Write Access read/write
LV Creation host, time ip-172-31-18-128, 2025-08-20 20:22:57 +0000
LV Status available
# open 0
# No process has this LV open right now.
# If it was mounted, you’d see 1.
LV Size 2.00 GiB
Current LE 512
# Each Logical Extent (LE) = 4 MiB (from your vgdisplay).
# 512 × 4 MiB = 2048 MiB = 2 GiB
Segments 1
# The LV is stored in one contiguous chunk of space.
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
# This is the device major/minor number under /dev/mapper/.
# You can also access the LV as /dev/mapper/myvg0-lv0.

LVM commands cheatsheet:


1. pvcreate /dev/xvdc
2. pvcreate /dev/xvdd
3. pvs
4. pvdisplay
5. vgcreate myvg0 /dev/xvdc /dev/xvdd
6. vgs
7. vgdisplay
8. lvcreate -n lv0 -L 2G myvg0
9. lvcreate -n lv1 -L 3G myvg0
10. lvs
11. lvdisplay

Extending Logical Volumes in a Production Scenario


So far, we’ve seen a basic LVM setup: creating PVs, a VG, and a couple of LVs. But what happens
when one of our LVs starts to fill up? Or when the volume group itself is running out of space?
Let’s simulate a production scenario where lv0 gets filled so that we can practice extending it
without downtime.
Recall: our volume group (myvg0) is almost 11 GB in size, and the logical volumes we created are
2 GB (lv0) and 3 GB (lv1). This means we still have free space available in the VG that can be
allocated.
To fill lv0, we can use the dd command with /dev/zero:
Before that, let’s format lv0 and mount it at /mnt/lv0, as sketched below.
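A minimal sketch of those steps, assuming an ext4 filesystem:

# mkfs.ext4 /dev/myvg0/lv0
# mkdir -p /mnt/lv0
# mount /dev/myvg0/lv0 /mnt/lv0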

Filling and Extending a Logical Volume:


Now that lv0 is successfully mounted at /mnt/lv0, let’s fill it fully using the dd tool:
# dd if=/dev/zero of=/mnt/lv0/zerofile bs=1M status=progress

Check Current Usage


Before filling, df -h shows:
/dev/mapper/myvg0-lv0 2.0G 24K 1.8G 1% /mnt/lv0
This indicates that almost all space is free.
After running the dd command, lv0 is completely filled, and no more space is available.

Run df -h again to confirm.

Common LVM operations:


1. Extending the Logical Volume
In this situation, we can safely extend lv0 to increase its storage. LVM allows online resizing, so
the filesystem can continue to be used without downtime.
The next steps are:
1. Extend the logical volume using lvextend.
2. Resize the filesystem (for example, with resize2fs) to use the newly allocated space.
This demonstrates how LVM provides flexible storage management in production without
affecting mounted volumes.

# lvextend -L +2G /dev/myvg0/lv0

Even after extending a logical volume, you may notice that the filesystem size has not changed.
For example, running:

# df -h

might still show:


/dev/mapper/myvg0-lv0 2.0G 2.0G 0 100% /mnt/lv0
Resize the Filesystem
To make the newly allocated space usable, you need to resize the filesystem. For ext4, use:
# resize2fs /dev/myvg0/lv0
This will expand the filesystem to match the new size of the logical volume.
You should now see that /dev/myvg0/lv0 reflects the increased size, and additional space is
available for use.
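As a shortcut, lvextend can resize the filesystem in the same step via the -r (--resizefs) flag, avoiding the separate resize2fs call:

# lvextend -r -L +2G /dev/myvg0/lv0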

Extending a Volume Group by Adding New Physical Volumes


Now, let’s say we have two additional block devices available:
• /dev/xvde

• /dev/xvdf
Let’s create PVs on them, as shown below.
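Using the same command as before:

# pvcreate /dev/xvde /dev/xvdf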

We have two options:


1. Create a new volume group from these PVs.
2. Extend the existing volume group (myvg0) by adding these PVs.
In this example, we’ll extend the existing VG:

# vgextend myvg0 /dev/xvde /dev/xvdf

What Happens Next


• The VG myvg0 now includes four PVs in total.

• The total available space in the VG increases, giving us more room to extend existing
LVs or create new LVs.
• You can verify the changes with:
# vgdisplay
# vgs
This demonstrates how LVM allows dynamic expansion of storage pools in production scenarios
without downtime.

2. Shrinking LVs


Shrinking LVs is generally not recommended because it can lead to data loss. If you must shrink
an LV, always unmount it first to avoid filesystem corruption.

Let’s first extend lv0 to use all remaining free space in the volume group:
# lvextend -l +100%FREE /dev/myvg0/lv0

The LV will now include all unallocated physical extents in the VG.
• After extending, don’t forget to resize the filesystem so the LV can actually use the new
space:
# resize2fs /dev/myvg0/lv0
Difference Between -L and -l in lvextend
• -L <size> → Specify size in absolute units (e.g., 2G, 500M)

• -l <extents> → Specify size in physical extents (PEs)

• Example: -l +100%FREE means use all free extents in the VG

Now let’s format lv1 and mount it at /mnt/lv1, as sketched below.
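A minimal sketch, again assuming ext4:

# mkfs.ext4 /dev/myvg0/lv1
# mkdir -p /mnt/lv1
# mount /dev/myvg0/lv1 /mnt/lv1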


After formatting and mounting /dev/myvg0/lv1, let’s follow the recommended steps to shrink lv1 safely.
a. Unmount first
# umount /mnt/lv1
b. Run fsck to avoid corruption
# e2fsck -f /dev/myvg0/lv1
c. Shrink the filesystem to 4G
# resize2fs /dev/myvg0/lv1 4G
d. Shrink the LV
# lvreduce -L 4G /dev/myvg0/lv1
e. Mount again
# mount /dev/myvg0/lv1 /mnt/lv1

Verification of Shrink:
Now run the lsblk command to verify the shrink:
# lsblk

3. Moving data between Lvs Live, with no downtime


LVM allows you to migrate data from one physical volume to another without unmounting the
logical volumes. This is especially useful when replacing disks or redistributing storage in
production.

Migrate Data from One PV to Another


For example, to move all data from /dev/xvdc to /dev/xvde:
# pvmove /dev/xvdc /dev/xvde

What Happens
• LVM copies all allocated extents from the source PV (/dev/xvdc) to the destination PV
(/dev/xvde).

• Logical volumes remain online, so services continue running with no downtime.


• Once migration completes, the old PV (/dev/xvdc) can be removed from the VG using:
# vgreduce myvg0 /dev/xvdc

Key Benefit:
This feature allows live data migration and hardware replacement without interrupting services,
making LVM ideal for production environments.

4. Creating Snapshots
LVM allows you to take instant snapshots of a logical volume. Snapshots are useful for backups,
testing, or recovery, as they capture the LV’s state at a specific point in time.
Create a Snapshot
To create a snapshot of lv0:
# lvcreate -s -n lv0_snap -L 1G /dev/myvg0/lv0

Explanation of the command:


• -s → create a snapshot

• -n lv0_snap → name of the snapshot

• -L 1G → size of the snapshot (space reserved for changes)

• /dev/myvg0/lv0 → the original logical volume being snapshotted

How Snapshots Work


• The snapshot initially shares data with the original LV to save space.
• Only changes made after the snapshot consume the snapshot space.
• You can mount and access the snapshot like any LV, making it ideal for testing or backup
operations without affecting the original volume.
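For example, a snapshot can be mounted read-only for inspection (a sketch; the noload option skips journal recovery, which an ext4 snapshot taken while the origin was mounted typically needs):

# mkdir -p /mnt/lv0_snap
# mount -o ro,noload /dev/myvg0/lv0_snap /mnt/lv0_snap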

How LVM represents snapshots internally

When you create a snapshot, LVM uses several internal components:


1. myvg0-lv0-real

• The original backing volume for your LV (lv0).

• After snapshot creation, LVM internally renames the actual data store to lv0-real.

• Both the original LV and the snapshot reference this.


2. myvg0-lv0

• Your original logical volume.


• Acts as a mapping on top of lv0-real, so you continue using it normally.
3. myvg0-lv0_snap

• The snapshot LV (/dev/myvg0/lv0_snap).

• Appears as a frozen, point-in-time copy of lv0.


4. myvg0-lv0_snap-cow

• The Copy-On-Write (COW) storage area for the snapshot.


• Size you assigned (-L 1G) limits how much changed data it can track.

• Whenever data in lv0 changes, the original blocks are copied here, so the snapshot
still sees the unmodified data.
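You can list these internal device-mapper targets yourself (a quick check, not required for normal use):

# dmsetup ls

The -real and -cow entries appear alongside myvg0-lv0 and myvg0-lv0_snap.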

Making Changes in lv0

# echo "This is the first change" > /mnt/lv0/file1.txt
# echo "This is the second change" > /mnt/lv0/file2.txt

Now lv0 has changed, but lv0_snap still preserves the original state.

Restoring the Original LV from a Snapshot


1. Unmount the LV
# umount /mnt/lv0

2. Merge the Snapshot


# lvconvert --merge /dev/myvg0/lv0_snap

This restores the original LV to the state captured in the snapshot.


3. Remount and verify

# mount /dev/myvg0/lv0 /mnt/lv0

Now lv0 reflects the snapshot state, effectively undoing changes made after the snapshot.

5. Thin Provisioning
Thin Provisioning is a smart way to allocate storage space on demand rather than reserving the full
size upfront.
Normally, when you create a 100 GB logical volume (LV), the entire 100 GB is immediately
reserved from the volume group (VG), even if you only store 1 GB of data. With thin provisioning,
you can still create a 100 GB LV, but if you only store 1 GB, it will consume just ~1 GB from the
VG.
This allows you to overcommit storage: you can create LVs larger than your available physical storage, while only actually consuming the space that has data written to it.

How It Works
1. Create a thin pool LV inside a VG

• The pool dynamically manages space allocation.


2. Create thin logical volumes (thin LVs) inside the pool
• These LVs look like normal LVs to the OS, but they don’t consume their full size
until data is written.

Analogy
Imagine you own a parking lot with 10 spaces. You issue 50 parking permits to employees.
• Not everyone comes to the office daily, so usually only 2–3 cars are parked.
• This allows you to oversubscribe permits compared to actual capacity.
• But if all 50 employees come at once, you’ll run out of parking.
Thin Provisioning = efficient space usage + flexibility, but with risk if overcommitted.

Hands-On Example
Assume we have a 50 GB block device /dev/xvdg.
1. Create a physical volume (PV)
# pvcreate /dev/xvdg

2. Create a volume group (VG)


# vgcreate thinvg /dev/xvdg

3. Create a Thin pool (10 GB)


# lvcreate -L 10G -T thinvg/thinpool

4. Create thin LVs (50 GB and 100 GB) inside the pool


# lvcreate -n thinlv1 -V 50G -T thinvg/thinpool
# lvcreate -n thinlv2 -V 100G -T thinvg/thinpool

5. Format and mount the thin LVs


# mkfs.ext4 /dev/thinvg/thinlv1
# mount /dev/thinvg/thinlv1 /mnt/thinlv1
# mkfs.ext4 /dev/thinvg/thinlv2
# mount /dev/thinvg/thinlv2 /mnt/thinlv2
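Note: if the mount commands above fail because the mount points don’t exist, create them first:

# mkdir -p /mnt/thinlv1 /mnt/thinlv2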

6. Write data
Example: 2 GB on thinlv1 and 1 GB on thinlv2
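For example, using dd (a sketch):

# dd if=/dev/zero of=/mnt/thinlv1/file1 bs=1M count=2048
# dd if=/dev/zero of=/mnt/thinlv2/file2 bs=1M count=1024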

7. What if the pool fills?


If you keep writing and the thin pool runs out of space (10 GB full), writes will fail with
I/O errors.
Solution: extend the pool dynamically:
# lvextend -L +5G thinvg/thinpool
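You can watch how full the pool is getting with lvs, whose Data% column reports pool usage:

# lvs thinvg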

In the upcoming articles, we’ll dive into troubleshooting storage with different tools and explore
advanced storage solutions (NFS, Longhorn, Ceph, OpenEBS, NAS, etc.).
For production-level insights into Linux storage and filesystems, make sure to follow me on
LinkedIn and Medium.
