Storage

This document provides a comprehensive guide on configuring and managing swap space and mounting filesystems in Linux. It explains how to create swap partitions and files, format them, and ensure they are automatically mounted at boot time using the /etc/fstab file. Additionally, it covers commands for checking swap usage and filesystem details, as well as the importance of using UUIDs for device identification.

Visit www.kodekloud.com to discover more.

Now, let's look at how to configure and manage swap space in Linux.
In this lesson we'll discuss how we can create a so-called swap partition. Swap is an area where Linux can temporarily move
some data from the computer's Random Access Memory (RAM). To understand this mechanism let's go through an overly
simplified scenario.

Let's imagine we have a computer with 4GB of RAM. We open a video editor that uses 2GB of RAM, then an audio
editor that needs another 2GB. Now we have no free memory left. But we have a 2GB swap partition. Although no
more RAM is available, we'll see we can still open Chrome. How is this possible? When we try to open
Chrome, the following happens: Linux sees no more RAM is available, but it also sees that we haven't used the
video editor in the last hour. It's basically just sleeping there, inactive. So it decides it can move the
video editor's in-memory data to our swap partition. Moving that data from RAM to swap frees up
2GB of RAM, which Chrome can now use.
Now let's see some commands. To check if the system uses any kind of swap areas, we can use this command:

swapon --show

Since we don't have swap set up yet, the command will output nothing at this point. So let's set up a partition to be used as
swap.
First, we take a look at what partitions we have available:

lsblk

In the previous lesson, we created the partition at vdb3 specifically to be used as swap. This is currently
empty, with no data inside. Before it can be used as swap, it has to be prepared. If you've ever formatted a USB
stick with a filesystem like FAT32 or NTFS, this is a similar process: it writes a small header
on the partition and labels it, so that the system knows this is meant to be used as a swap area.
To "format" it as swap, we'll use this command:

sudo mkswap /dev/vdb3

Now we can tell Linux: "Use this partition as swap:"


sudo swapon --verbose /dev/vdb3

The verbose option makes the command output more details about what it's doing. An equivalent command
is:

sudo swapon -v /dev/vdb3

Seems everything worked perfectly. We can check with:

swapon --show

once again. And this time the command will list our swap partition here.
However, there's still a problem: if we reboot this system, /dev/vdb3 won't be used as swap anymore, because the change
we made is temporary. We'll see how to tell Linux to use it as swap automatically at every boot in
another lesson, where we learn how to mount partitions.

To stop using our partition as swap space, we can enter this command:
sudo swapoff /dev/vdb3

Instead of partitions, we can also use regular files as swap. In fact, if you install Ubuntu on a personal
computer, that's what it does by default nowadays: it sets up swap on a file.

To do this manually, we first need to create an empty file and fill it with binary zeroes. We'll use a utility
called dd, with these parameters in the command:

if=/dev/zero

This tells it to use the input file at /dev/zero. It's a special device file that generates an infinite number of
zeroes when an application reads from it.

After we read from that file, we need to tell dd where to write the output. Basically, it will copy the input file
to the output file. So we'll tell it to write the contents in a file we'll store at this path: /swap.

of=/swap

Then we specify the block size to be 1MB:

bs=1M

This, combined with the next parameter:

count=128

tells dd to write a 1 Megabyte block 128 times. So, we'll get 128 megabytes in total.
Our final command will look like this:

sudo dd if=/dev/zero of=/swap bs=1M count=128

This file is small, so it will be written quickly. But with a large file, we might have to wait a while, and dd
won't show what percentage it has written. We can add another parameter, status=progress, which makes
it display its progress while writing to that file. Here's an example command writing 2GB of data:

sudo dd if=/dev/zero of=/swap bs=1M count=2048 status=progress

Regular users should not be allowed to read this swap file. That's because this potentially gives them access
to the memory contents of programs other users might be using. So, we'll set permissions to only allow the
root user to read and write to this file:

sudo chmod 600 /swap
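Putting the file-based steps together, here's a compact sketch you can run without root. The /tmp/swapdemo path and the 16 MB size are placeholders chosen purely for demonstration; the lesson's real swap file lives at /swap, and the mkswap/swapon steps (shown as comments) still require sudo:

```shell
# Create a 16 MB file filled with binary zeroes (status=none keeps dd quiet).
dd if=/dev/zero of=/tmp/swapdemo bs=1M count=16 status=none

# Only root should be able to read or write a swap file.
chmod 600 /tmp/swapdemo

# Check the size in bytes and the permission bits.
stat -c '%s %a' /tmp/swapdemo    # prints: 16777216 600

# The remaining steps need root, shown here only for completeness:
#   sudo mkswap /tmp/swapdemo
#   sudo swapon /tmp/swapdemo
```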


Formatting this as swap is the same as before. But instead of using a file such as /dev/vdb3 that points to a partition, we
just use a regular file in our command:

sudo mkswap /swap

And now we can use this as swap on our system:


sudo swapon --verbose /swap

We can check active swap areas again:

swapon --show

And we'll see it's indeed used as swap.

Note how we can use multiple things as swap, at the same time. In our case we use the vdb3 partition as
swap, but also our file. We can also use multiple files, or multiple partitions if we want to.
Now let's explore how to mount filesystems, both manually and automatically at boot time.
In a previous lesson we explored how to create filesystems. But you might have noticed, there's still no way for us to access
those filesystems. How do we create files and directories on them?

To make a filesystem accessible we must first mount it. Mounting basically means attaching or plugging in a filesystem to
one of our directories. Here's how it works. First, we can take a look at a directory often used for temporarily mounting a
random filesystem:
ls /mnt/

This is currently empty. Now let's say we want to mount the XFS filesystem created in one of our previous
lessons. This is stored on the partition at /dev/vdb1. To mount it, we use this command:

sudo mount /dev/vdb1 /mnt/

With the filesystem mounted at /mnt/, we can now easily create a file on it:

sudo touch /mnt/testfile

And we can see that our previously empty filesystem now has one file inside:

ls -l /mnt/

We can use the lsblk command to confirm that our filesystem is indeed mounted:

lsblk
To unmount a filesystem, we use the umount command. We might expect it to be spelled unmount, with an n, but it's umount, without that n.

To unmount, we just specify the directory where our filesystem is mounted:

sudo umount /mnt/


Now lsblk confirms this is unmounted:

lsblk

And we can see /mnt/ is once again an empty directory, with no contents inside:

ls /mnt/
We notice in the lsblk command that some filesystems on some partitions are already mounted. For example, /dev/vda2 is
mounted in the /boot/ directory. When a Linux operating system boots up, it automatically mounts some filesystems. It
does this according to instructions in a file. Let's see how we can make it mount our XFS filesystem at /dev/vdb1. First, we'll
create a directory that we intend to use as our mounting point:

sudo mkdir /mybackups/


/etc/fstab is the file where we can define what should be mounted automatically when the system boots up. Let's edit it:

sudo vim /etc/fstab

A line in fstab has 6 fields. Let's take a look at what each one represents.
The first usually points to the block device file that represents a partition or some kind of storage space. In
the line we will add, this will be the block device file /dev/vdb1 representing the first partition on our second
virtual disk. To be more specific, by writing /dev/vdb1 in our first field, we tell Linux "mount the filesystem
contained in the partition at /dev/vdb1".

The second field specifies the mount point. In our case, this will be the directory where we want to mount our
filesystem, /mybackups.

On the third field we specify the filesystem type. This is xfs in our case. But if we'd want to automount
/dev/vdb2 instead of /dev/vdb1 we'd type ext4 since we have an ext4 filesystem on the other partition.

The fourth field will contain the mount options. We'll type defaults here to use default mounting options. But
we'll take a brief look at custom mount options in one of our next lessons.

The fifth field determines whether a utility called "dump" should back up this filesystem: 0 means backup disabled, 1
means backup enabled. dump is rarely used these days, so we can usually just set the fifth field to 0.

Filesystems can sometimes get corrupted. And the last field decides what happens when errors are detected.
We can have three values here: 0, 1, or 2. 0 means the filesystem should never be scanned for errors. 1 means
this filesystem should be scanned first for errors, before the other ones. 2 means this filesystem should be
scanned after the ones with a value of 1 have been scanned. In practice, we should often write "1" here for
the root filesystem (where the operating system is installed). And we should write "2" for all the other
filesystems.

We'll end up with this line in our fstab file:

/dev/vdb1 /mybackups xfs defaults 0 2
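The six fields of this entry map onto the descriptions above like this (fstab allows comments, which start with #):

```
# <device>   <mount point>  <type>  <options>  <dump>  <fsck order>
/dev/vdb1    /mybackups     xfs     defaults   0       2
```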

If we had wanted to mount our other filesystem, stored on the second partition, we would have used
a line like this:

/dev/vdb2 /mybackups ext4 defaults 0 2

But we won't use it here, since it would conflict with our previous line, which also mounts into the
/mybackups directory.

Let's save this file.

If we edit fstab and don't intend to reboot, systemd (the system and service manager) won't
automatically know about our changes. With the next command, we can force it to
pick up any changes we made to important configuration files, fstab included:

sudo systemctl daemon-reload

If we reboot, it will pick up on these changes automatically.


Now if we look at this directory:

ls /mybackups/

we'll see it's empty, as expected.


Also,

lsblk

will show nothing is mounted in /mybackups/.


But if we reboot:

sudo systemctl reboot

and then SSH back in, and enter this command:


ls -l /mybackups/

we'll see the file we created on our XFS filesystem at the beginning of this lesson.

Also,

lsblk

will confirm that everything worked as expected. Our filesystem was automatically mounted to the
/mybackups directory when our operating system booted up.
We can usually figure out what we need to write in /etc/fstab just by looking at existing entries. But if you ever forget what
each field represents, remember that the

man fstab

command can be helpful.


In a previous lesson we created a swap partition at /dev/vdb3. It would be useful if this would be mounted automatically at
boot time too. So, let's edit fstab again:

sudo vim /etc/fstab

We'll add this line:


/dev/vdb3 none swap defaults 0 0

We see this line is similar to the line we created for our xfs filesystem. There are only two notable differences
in the second and third field. The second field should normally contain the mount point, the directory a
filesystem should be mounted in. But we specify none here, since swap is not meant to be mounted in any
directory. In the third field, the filesystem type, we specify this is swap. And, finally, we use "0 0" in the last
two fields as swap is not meant to be backed up or scanned for errors.

If we reboot and then use this command:

swapon --show

we'll see that our swap partition is enabled automatically at boot time.
In the /etc/fstab file we can also notice lines like these:

UUID=ec83bbc6-458e-490c-9188-2d5cc97a0e20 /boot xfs defaults 0 0

Or
/dev/disk/by-uuid/ec83bbc6-458e-490c-9188-2d5cc97a0e20 /boot ext4 defaults 0 1


These lines can be equivalent to something like /dev/vda2 mounted in the /boot/ directory. So the lines
above could also have been written this way:

/dev/vda2 /boot ext4 defaults 0 1

So why was a UUID, a universally unique identifier, used instead of a block device file like /dev/vda2? Imagine a
real server with two SSDs connected to SATA ports on the motherboard. The first SSD might be found at
/dev/sda, the second at /dev/sdb. But if we remove the SATA cables and connect them in reverse order on
the motherboard, the first SSD could appear at /dev/sdb instead of /dev/sda the next time we boot. Linux
assigns device names like /dev/sda and /dev/sdb in the order it sees the devices connected to the motherboard.

Mounting the wrong thing in the wrong place because the device name has changed could lead to bad results.
Therefore, UUIDs can be used instead. These remain the same, even if we connect our storage devices in a
different order on the motherboard.

To check the UUID of a block device, we can use this command:

sudo blkid /dev/vdb1

Also, if we want to use a device name in the form of /dev/disk/by-uuid/, we can just explore this directory with
an ls -l command:

ls -l /dev/disk/by-uuid/

And we'd see the unique identifiers and what block devices they actually point to. Next we could pick what
we need from here, and use the /dev/disk/by-uuid/ format, followed by the unique identifier as the first field
in one of our fstab lines.
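As a small illustration, a UUID-based entry is just the first field swapped out. Here we build one with printf, reusing the UUID string from the example above; on a real system you'd take the value from the output of sudo blkid:

```shell
# Build an fstab line from a UUID instead of a /dev/ path.
uuid="ec83bbc6-458e-490c-9188-2d5cc97a0e20"
printf 'UUID=%s /mybackups xfs defaults 0 2\n' "$uuid"
# prints: UUID=ec83bbc6-458e-490c-9188-2d5cc97a0e20 /mybackups xfs defaults 0 2
```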
Now, let's look at how to adjust filesystem and mount options.
We saw how the

lsblk

command gives us a nice overview of what is mounted where. It's short and to the point. But, sometimes, we'll also want
the details; details like: what filesystem is mounted here? And what are the mount options for this filesystem? Of course,
we'll first need to understand what mount options are. But we'll get to that in a few seconds.

First, let's look at another command that shows everything that is mounted on this system:

findmnt

We can spot our /dev/vdb1 partition with the xfs filesystem, mounted in the /mybackups directory.

This command will output many lines, usually about double what we can see here; we've truncated the output in this
example so it fits the screen. It can look messy and hard to read, especially if we don't need all this
information. findmnt shows all mount points, including some virtual stuff. For example, proc is a virtual
filesystem, mounted in the /proc directory. This filesystem does not exist on some storage device
somewhere, only in the computer's Random Access Memory. In our situation, we don't need to see all this
virtual stuff. So, we can tell findmnt to only show us "real" filesystems. For example, we can say: show us only
xfs and ext4 filesystems that are mounted. To do so, we use the "-t" type command line option:

findmnt -t xfs,ext4

Now that looks much cleaner and is easy to read.

An important note to make is that findmnt only shows us filesystems that are currently mounted. Those that
exist on some partition somewhere, but are not yet mounted, won't show up in this output.
Notice the OPTIONS column here. This is where findmnt shows us the mount options. What are mount options? It will be
easier to understand if we look at the effects of these. For example, we see /dev/vdb1 is mounted with an option called rw.
That means we can read and write to this filesystem. We see it's true since we can create a new file on it:

sudo touch /mybackups/testfile2


But what if we would use an option called ro instead of rw? That would make the filesystem read-only.

We can mount a filesystem with specific options with the -o switch passed to the mount command. For
example:

sudo mount -o ro /dev/vdb2 /mnt

Let's check out our mounted filesystems again:

findmnt -t xfs,ext4

We see our mount option was applied. Now we can also test it. Let's try to create a file:

sudo touch /mnt/testfile

As expected, this action is denied since our filesystem is mounted in read-only mode.
So, we can draw the general conclusion that mount options, in a way, tell the filesystem how to behave, what rules it should
follow, while it is mounted.

Let's unmount this:

sudo umount /mnt


And see how we can use multiple mount options, like ro, noexec and nosuid. noexec makes it impossible to
launch a program stored on this filesystem. nosuid disables the SUID permission that can allow programs to
run with root privileges, without needing the sudo command. Sometimes options like noexec and nosuid are
used to improve security on filesystems that should not have programs on them. For example, these are used
on Android phones, on the storage area meant for photos, videos, and similar data. Even if we
plug the phone into an infected computer and a virus tries to replicate onto this filesystem, it has
no effect. The virus can write infected files, but they land on a noexec and nosuid
mount point, so they cannot execute and do any harm. They're just a bunch of files sitting there, doing
nothing.

To mount our filesystem with these options, we run this command:

sudo mount -o ro,noexec,nosuid /dev/vdb2 /mnt

These options should be separated only by commas, without any spaces between them.
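As a quick illustration of that rule, a tiny shell check can catch a stray space in an option string before it's passed to mount (the option list here is just the one from this lesson):

```shell
opts="ro,noexec,nosuid"

# Mount options must form one comma-separated word with no spaces.
case "$opts" in
  *" "*) echo "invalid: option string contains a space" ;;
  *)     echo "ok: $opts" ;;
esac
# prints: ok: ro,noexec,nosuid
```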

Once again, we can confirm our options were applied:

findmnt -t ext4,xfs

Now let's say we want to write to this filesystem. We should change the ro mount option to rw.

If we try this:

sudo mount -o rw,noexec,nosuid /dev/vdb2 /mnt

It wouldn't work. mount complains this is already mounted. But we can tell it to remount the filesystem with
the new options, with a command like this:
sudo mount -o remount,rw,noexec,nosuid /dev/vdb2 /mnt

Basically, we just threw "remount" in there as yet another mount option.


The options we used up to this point are called Filesystem-Independent. That's because they can be used on almost any
filesystem.

Most of these mount options are described in the manual of the "mount" command:

man mount
But other options are filesystem-specific. Some options only work on xfs, others only on ext4, and so on. To see filesystem-
specific mount options, we need to consult the manual of that filesystem. For example, in

man xfs

We can scroll down and reach the MOUNT OPTIONS section, where we can see these described:
For example, let's say we want to use the allocsize option described here. We'll first unmount /dev/vdb1 since
it's already mounted in our scenario.

sudo umount /dev/vdb1

It's not recommended to use mount -o remount in this case. With filesystem-independent options that's fine,
but filesystem-specific options might not be applied if the filesystem is already mounted.

Here's how we would use the allocsize option and set it to 32 kilobytes:

sudo mount -o allocsize=32K /dev/vdb1 /mybackups

To see filesystem-specific mount options for ext4, use the "man ext4" command.
Up to this point, we mounted our filesystems manually. But let's remember that these can also be mounted automatically
at boot time. How do we apply mount options there?

Let's open up /etc/fstab:

sudo vim /etc/fstab


Remember how in a previous lesson we said we'll use the default mount options on this line?

/dev/vdb1 /mybackups xfs defaults 0 2

If we want to use custom options, such as the ro and noexec options we explored earlier, we can rewrite the line like
this:

/dev/vdb1 /mybackups xfs ro,noexec 0 2

If we save the file and reboot:

sudo systemctl reboot

we can see our mount options were automatically applied at boot time:

findmnt -t xfs,ext4
In our previous lessons we learned how to work with local filesystems and local block storage devices. Basically, data that
exists on the same system we're logged into. But, sometimes, we'll need data that resides on a different system.

In this lesson, we'll learn how to use remote filesystems. And in one of our next lessons, how to use remote block devices.
Let's go through these scenarios in order.
There are multiple protocols that allow us to access remote filesystems.
A protocol is, in a way, a "language" that a client and server use to communicate. They need to "speak" the same language
so that they can understand each other, negotiate a connection, and transfer data.
Linux supports many protocols for the purpose of sharing filesystems. But for sharing data between two Linux computers,
the Network Filesystem protocol is most often used. This is also abbreviated as NFS.

Let's see how we can use NFS on Linux.


There are two parts to using NFS: setting up the NFS server, and then setting up the NFS clients. On the server, we
configure everything to allow it to share a filesystem with the world. And on the client-sides, we configure everything to
allow them to mount the remote filesystem from that server.

Let's jump right into that first part, setting up the NFS server. Then we'll explore how to use NFS on the client side as well.
On the server side, the one we're sharing data from, we'll first need to install the "nfs-kernel-server" package. So we'll
enter this command:

sudo apt install nfs-kernel-server

Next, we need to tell our NFS server what filesystems, or directories we want to share. And this is fairly straightforward. All
we need to do is edit the "/etc/exports" file.

If we open it up for editing, with:

sudo vim /etc/exports

we'll see a few commented lines which are very helpful.


These show us a few examples, teaching us how to define our shares. From the highlighted line, we can see the basic
structure:

First, we type the path to the directory we want to share. It can be something like "/srv/homes", or "/nfs/disk1/backups", or
whatever we might want to use here. Usually, we'll want to point at the path where a filesystem is mounted. But we can
also point to a subdirectory belonging to that filesystem.
Next, we need to choose who, or what can access this share. In other words, we specify which NFS clients
should be allowed to use this remote filesystem. In this example, we can see "hostname1". That's the
hostname of a computer on our network. These hostnames can potentially take more complex forms as well,
such as "example.com" or "server1.example.com". Instead of hostnames, we can also use IP addresses here.
So we could replace "hostname1" in this highlighted line, with something like "10.0.0.9", or whatever the IP
address of the computer accessing this share would be. And there's a third option as well. If we want to allow
an entire range of IPs to access this share, we can use the CIDR notation, which we learned about in our
networking lessons. For example, if we want to make this share accessible to any computer with an IP address
from 10.0.16.0 through 10.0.16.255, we can type "10.0.16.0/24" here.
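The size of that range follows directly from the prefix length: a /24 fixes the first 24 of the 32 bits in an IPv4 address, leaving 8 host bits. A one-line shell calculation confirms the count:

```shell
# Number of addresses covered by a /24 prefix: 2^(32-24) = 256
prefix=24
echo $(( 1 << (32 - prefix) ))    # prints: 256
```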

Finally, between parentheses, we include the export options. In this case, these are: "rw, sync, and
no_subtree_check".
What's interesting in this highlighted line is that we see we can enumerate multiple hosts that should be able to access this
share. And we can do it on the same line. In this case, we can notice the "/srv/homes" directory is also shared with a second
host, called "hostname2". And this includes slightly different export options. Notably, this uses the "ro" option instead of
"rw". Now let's explore what these export options mean.

The important options to know about are rw, ro, sync, async, no_subtree_check, and no_root_squash.
rw stands for read/write. This allows clients mounting this share remotely, to both read, and write to this
filesystem, or subdirectory.

ro stands for read-only. This allows clients to read, but not write.

sync is for synchronous writes of data. That is, NFS ensures that data written by a client is actually saved on
the storage device, before reporting that the operation is successful.

async is for asynchronous writes. A client can issue multiple write requests. And NFS can report that the
writes are completed, even before they are actually saved on the storage device. async allows clients to do
things faster, but it does not guarantee that all changes are actually stored. So we can end up in a situation
where NFS reports to the client that the write is complete, even though it isn't actually committed to the
storage device yet. Then, if the server reboots unexpectedly, that write operation could be lost. sync is
slower, but it guarantees that when a write is reported as being complete, it is actually stored on the device.

no_subtree_check disables subtree checking. Normally, with NFS, we'd want to export an entire filesystem.
For example, say we have a backup disk on the NFS server. And we mount it in "/nfs/disk1". Then we'll export
"/nfs/disk1" as an NFS share. But we can also export a subdirectory here, like "/nfs/disk1/backups/databases".
If subtree checking would be active, the NFS server would always check if a requested file resides in this
specific subdirectory that was exported. Which can cause a few issues when files are renamed, or moved to
another subdirectory. Subtree checking is basically intended as an extra security check. But it has some
drawbacks and a few performance implications. That's why, by default, "no_subtree_check" will be assigned
to all NFS exports, even if we don't specify this option ourselves.

no_root_squash allows a root user on the NFS client to also have root privileges on a remote NFS share they
mount. By default, NFS squashes root privileges. That is, if we are root on the client, and we mount an NFS
share, then we won't be able to read and write as root on that remote filesystem. We'll be squashed, or
downgraded, to regular user privileges instead. All read and write requests as "root" on the client will be
mapped to a user called "nobody" on the server. This is a mild security measure that ensures root users on
NFS clients cannot do anything they want on remote filesystems. If we want to deactivate this behavior, then
we can use the "no_root_squash" option. This way, root on the client, will also be treated as the root user on
the remote filesystem.
To see information about the /etc/exports file, including other mount options and syntax specifications, we can use this
command:

man exports
Now that we understand how to define an NFS share on our server, let's say we want to add our own line. What would we
write if we wanted to share the "/etc" directory with a computer with the IP address: "127.0.0.1"? And if we also wanted
that computer to only be able to read files from this share, but deny write access to it.

First, we'd open up the exports file with:


sudo vim /etc/exports

Then we'd write a line like this, at the end:

/etc 127.0.0.1(ro)
After modifying the exports file, we must also inform our NFS server about these changes. So it can actually start sharing
what we defined here. To apply these changes, we use this command:

sudo exportfs -r

The -r option stands for re-export. You can think of it as doing a refresh, based on the "/etc/exports" file. It will share what
is defined in that file, but also unshare whatever has been removed.

To see the current exported NFS shares, we can type:

sudo exportfs -v

-v stands for verbose, that is, detailed output. This will show us what shares are currently active, but also the
export options associated with each one.
Notice that in our case, we only used "ro" as our option. But there are many more options active for this share. That's
because the NFS server will assign some options by default, even if they are not explicitly defined by the administrators in
the /etc/exports file.

To wrap up this section, here are a few extra tips:


We can also use wildcards in the hostname field, in our "/etc/exports" file. For example, to share this directory
with all computers that have a hostname ending in ".example.com" we could add a line like this:

/etc *.example.com(ro,sync,no_subtree_check)

The asterisk "*" is the wildcard sign that matches anything that may come before .example.com. So clients
with hostnames like "server1.example.com" or "mail.example.com" will be allowed to access this share.

And to share with any client, we can simply use the asterisk sign alone:

/etc *(ro,sync,no_subtree_check)

It's important to remember to not add any extra spaces between the asterisk and the mount options. Same
rule applies when you use a hostname, or an IP address. The field where we define the clients allowed to
access this share should be glued together with the open parenthesis that includes the options.
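Pulling the examples from this section together, a finished /etc/exports could look like this (the hostnames are placeholders; note how each client field is glued to its parenthesized options):

```
# <directory>   <client>(<options>) [<client>(<options>) ...]
/srv/homes      hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
/etc            127.0.0.1(ro) *.example.com(ro,sync,no_subtree_check)
```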
Now let's move on to the client side. Things are much simpler here.

First, we need to ensure we have the utilities necessary to mount NFS shares. These utilities are included in the "nfs-
common" package, which we can install with:

sudo apt install nfs-common


Now that we already shared the /etc directory on our server, how would we mount it from a remote client?
Well, we already learned about the mount command in previous lessons. All we need to do is adapt it to this
use case.

The general syntax to mount a remote NFS share is:

sudo mount IP_or_hostname_of_server:/path/to/remote/directory /path/to/local/directory



For our specific use-case, we want three things here:

1. Mount something from an NFS server with IP address "127.0.0.1".


2. Mount the remote "/etc" filesystem, or directory, from that server.
3. And, finally, mount this into our local directory, "/mnt".
Which means our command should be:

sudo mount 127.0.0.1:/etc /mnt

Basically, all we did was add an extra part to our regular mount command, where we specified the location of our remote
server, followed by the ":" colon symbol.
If we'd want to use a hostname, like "server1" instead, we would type something like:

sudo mount server1:/etc /mnt

The rest of the commands are similar to what we've learned. For example, to unmount this NFS share, all we
need to do is type:

sudo umount /mnt

Finally, if we'd want this NFS share to be auto-mounted at boot time, we'd go through familiar steps. First,
we'd open the /etc/fstab file for editing:

sudo vim /etc/fstab

Then we'd add a line like this:

127.0.0.1:/etc /mnt nfs defaults 0 0


The only differences compared to what we did in fstab in earlier lessons, is that the source of this mount specifies an IP
address and a remote directory. And in the field where we normally specify "ext4" or "xfs" as the filesystem type, we
entered "nfs".

The rest is similar. Where we specify mount options, we can write "defaults" to let NFS autoconfigure its mount options. Of
course, if you need specific mount options you can just enumerate them here just like we did in our previous lessons.
Multiple values separated by commas. Finally, since this directory is not local, we don't need to check the
filesystem for errors. So our last two fields should be "0 0".

This covers the essentials of creating NFS servers, and mounting remote filesystems on NFS clients. Let's
jump into our next lesson.
In this lesson we'll explore how to mount block devices from remote servers.
In previous lessons we learned about special files that can reference our storage devices. For example, /dev/sda, or
/dev/vda points to our entire first storage device on the system. And /dev/sda1, or /dev/vda1 points to the first partition on
the first disk. These are block special files that reference block devices.
Network block devices do a similar thing. But instead of pointing to storage devices plugged into our system, they point to
storage devices on an entirely different computer.
For example, imagine this setup:

We have one server with two storage devices referenced by the /dev/vda and /dev/vdb block special files. Then we have
another server with two storage devices, referenced by similar files. With Network Block Devices, we can essentially add a
third disk to our first server.
NBD utilities create a special file called /dev/nbd0. As far as applications are concerned, /dev/nbd0 looks and
behaves just like any other block device. But whenever something reads or writes to this device, the requests
are sent to the real block device on Server 2. So any write request to /dev/nbd0 is redirected to /dev/vdb on
Server 2.

But things will be even easier to understand if we look at this in practice. So let's see how we can actually use
Network Block Devices, also called NBD when abbreviated.
When we deal with NBD, we have two locations that need to be configured:

The NBD server, containing the real block device that will be shared through the network.
The NBD client, where we will attach the remote block device we want to add to our system.
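To make this concrete, here is a rough sketch of what the two sides might look like. The export name "mydisk", the server address "10.0.0.2", and the shared disk /dev/vdb are all placeholder assumptions, not values from this lesson:

```shell
# --- On the NBD server (placeholder address 10.0.0.2) ---
# A minimal /etc/nbd-server/config exporting /dev/vdb under the
# name "mydisk" might look like this:
#
#   [generic]
#   [mydisk]
#       exportname = /dev/vdb
#
# Then start the server:
sudo systemctl start nbd-server

# --- On the NBD client ---
# Load the nbd kernel module so /dev/nbd0 exists:
sudo modprobe nbd

# Attach the remote export to /dev/nbd0:
sudo nbd-client 10.0.0.2 -N mydisk /dev/nbd0

# From here, /dev/nbd0 can be partitioned, formatted, and mounted
# like any local disk. To detach it again:
sudo nbd-client -d /dev/nbd0
```

This is only a sketch of the general shape of an NBD setup; the hands-on walkthrough that follows shows the exact steps.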
In this lesson we'll look at how we can monitor the performance of our storage devices.
Just like CPU and RAM, storage devices have their limits. If a process is overusing the CPU, keeping it at 100%, the system
will slow down considerably. And the same can happen if a storage device is overused.
We have utilities like "top", and "htop", to look at how processes are using the CPU and our RAM. But what about storage
devices? How can we see what's reading and writing to these?
There are many tools that can monitor read/write operations. Let's look at a few of the simpler ones. There's a package
called "sysstat" that contains a few such tools. We can install it with the usual command:

sudo apt install sysstat

This contains a few other system statistic utilities, but the ones we're interested in are called "iostat" and "pidstat".
iostat's name comes from I/O statistics. And I/O is short for Input/Output. Because we can input data to a storage device,
when we're writing to it. And the storage device outputs data when we read from it.

pidstat is short for "process ID statistics". Each process on Linux has a unique identifier, or PID. So this tool will show us
statistics for each process on our system, alongside their ID numbers.
We'll see why both "iostat" and "pidstat" are needed for our purposes.
Let's start with the simplest way to see a summary of how storage devices are used. We can use the iostat command with
no arguments. If we type:

iostat

at the command line, we'll see output like this:


Now let's figure out what this output shows us.

First of all, this displays historical usage of our storage devices. That is, it tells us how the devices were used
since the system was booted up, not necessarily how they're used currently. And this is a bit tricky to
understand when looking at the per second fields here. But let's simplify.

Those fields show an average of usage, divided by the total time since the system was started.
For example, let's say the system was booted 3 seconds ago. In the first second, something read from the disk at 100
kilobytes per second. In the next second, something read from the disk at 200 kilobytes per second. In the third second,
nothing was read from the disk anymore, so usage was at zero kilobytes per second. If we add 100+200+0 and then divide
by the total number of seconds since the system was booted, three seconds, we get an average usage of 300/3 = 100
kilobytes per second. So iostat might show us "100" in the "kB_read/s" field.
If another second passes, and nothing reads from the disk, iostat would show something different the next time we run it.
Now we have 100+200+0+0 divided by 4 total seconds since the system is up. So we might see "75" in the "kB_read/s" field.
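The arithmetic from this example can be checked directly in the shell:

```shell
# Average over the first 3 seconds of uptime: (100 + 200 + 0) / 3
echo $(( (100 + 200 + 0) / 3 ))

# One more idle second passes; same total, but now divided by
# 4 seconds of uptime:
echo $(( (100 + 200 + 0 + 0) / 4 ))
```

The first command prints 100 and the second prints 75, matching the values iostat would show in the "kB_read/s" field.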
The fields which are calculated this way are "tps", "kB_read/s", "kB_wrtn/s", and "kB_dscd/s".
When we're looking at what's using the storage devices we're interested in the first three fields:

tps stands for transfers per second. We can think of it in terms of "How many times did the system tell this device to read
or write something?".

kB_read/s refers to kilobytes read per second.


kB_wrtn/s shows kilobytes written per second.

The fields "kB_read" and "kB_wrtn" are much more straightforward. They simply show the total number of
kilobytes read or written since the system booted. They don't depend on time, so there's no average to
calculate.
Something to note here is that what iostat shows us does not always reflect the exact amount of data per second written
or read by a process. For example, a process can request to write 1 kilobyte every second. But iostat might show the device
is used at thousands of kilobytes per second, during that time. That's because iostat tries to figure out these numbers by
looking at what the device is reporting, not the process. And the device might work with larger blocks of data. Even if a
process sends just 1kB, the device might report it updated a much larger block. So estimates can be inflated here,
especially for small transfers. But they become more accurate for larger transfers.
A storage device can be overused, or stressed in two different ways:

Something reads or writes to it very often.


Or something reads or writes to it at very high speed. Large volumes of data are transferred.
If something uses the devices often, we'll see a high tps number. And for large volumes of data, we'll see large numbers
under the kB_read/s and kB_wrtn/s fields.

So even if something is writing very little data to the device, say 1 kilobyte at a time, doing this hundreds of times per
second is still stressful for the device, potentially pushing it to its limits, because each storage device can process only a
certain maximum number of requests per second. If one process does a high number of transfers per second, there's not much left
for other processes to use. So one process can starve many other processes by overusing a storage device.

The same goes for large transfers. If the device can only write 2 gigabytes of data per second, and a process is
already writing at 1.98 gigabytes, there's very little left for other processes. So they will wait for a long, long
time until they can finish writing their data. Which can lead to a very slow, unresponsive system.

And this is where "iostat" and "pidstat" can team up and help us investigate scenarios where processes
overuse our storage devices. Let's put on our detective hats and see how by looking at some commands in
action.
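As a preview, both tools accept an interval and a count, which makes them report live usage instead of averages since boot:

```shell
# Print three device reports, two seconds apart. The first report
# covers the time since boot; the later ones cover only the last
# two-second interval, so they reflect current activity.
iostat 2 3

# Print per-process disk I/O (kilobytes read and written per
# second, with PIDs) three times, two seconds apart, to see which
# process is generating the traffic.
pidstat -d 2 3
```

Running iostat to spot an overloaded device, then pidstat to find the responsible PID, is the basic investigation pattern we'll use.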
