Linux System Administrators Guide
Version 0.9
Lars Wirzenius
Joanna Oja
Stephen Stafford
Alex Weeks
Table of contents
1. Acknowledgments
1.1. Joanna's acknowledgments
1.2. Stephen's acknowledgments
1.3. Alex's acknowledgments
2. Revision History
3. Source and pre-formatted versions available
4. Typographical conventions
Chapter 1. Introduction
2.3.1. init
2.3.3. Syslog
2.3.9. Mail
5.4.1. NFS
5.4.2. CIFS
5.5. Floppies
5.6. CD-ROMs
5.7. Tapes
5.9. Partitions
5.10. Filesystems
7.2.3. Quotas
Chapter 9. Init
18.2. IRC
18.2.1. Colours
18.2.7. CTCPs
A.1. PREAMBLE
A.5. MODIFICATIONS
"Two things are infinite, the universe and human stupidity, and I'm not sure
about the former." (Albert Einstein)
1. Acknowledgments
1.1. Joanna’s acknowledgments
Many people have helped me with this book, directly or indirectly. I would
like to especially thank Matt Welsh for inspiration and LDP leadership,
Andy Oram for getting me to work again with much−valued feedback,
Olaf Kirch for showing me that it can be done, and Adam Richter at
Yggdrasil and others for showing me that other people can find it
interesting as well. Stephen Tweedie, H. Peter Anvin, Remy Card, and
Theodore Ts'o have let me borrow their work (and
thus make the book look thicker and much more impressive): a
comparison between the xia and ext2 filesystems, the device list and a
description of the ext2 filesystem. These aren’t part of the book any more.
I am most grateful for this, and very apologetic for the earlier versions
that sometimes lacked proper attribution. In addition, I would like to thank
Mark Komarinski for sending his material in 1993 and the many system
administration columns in Linux Journal. They are quite informative and
inspirational.
1.2. Stephen's acknowledgments
I would like to thank Lars and Joanna for their hard work on the guide.
In a guide like this one there are likely to be at least some minor
inaccuracies. And there are almost certainly going to be sections that
become out of date from time to time. If you notice any of this then
please let me know by sending me an email to:
[email protected]. I will take virtually any form of input (diffs,
plain text, HTML, whatever); I am in no way above allowing others to
help me maintain such a large text as this.
Many thanks to Helen Topping Shaw for getting the red pen out and
making the text far better than it would otherwise have been. Also thanks
are due just for being wonderful.
1.3. Alex's acknowledgments
I would like to thank Lars, Joanna, and Stephen for all the great work that
they have done on this document over the years. I only hope that my
contribution will be worthy of continuing the work they started.
2. Revision History
3. Source and pre-formatted versions available
The source code and other machine readable formats of this book can be
found on the Internet via anonymous FTP at the Linux Documentation Project
home page http://www.tldp.org/, or at the home page of this book at
http://www.draxeman/sag.html. This book is available in at least its
SGML source, as well as HTML and PDF formats. Other formats may be available.
4. Typographical Conventions
I will add to this section as things come up whilst editing. If you notice
anything that should be added then please let me know.
Chapter 1. Introduction
"In the beginning, the file was without form, and void; and emptiness was
upon the face of the bits. And the Fingers of the Author moved upon the face
of the keyboard. And the Author said, Let there be words, and there were
words."
System administration covers all the things that you have to do to keep a
computer system in usable order. It includes things like backing up files (and
restoring them if necessary), installing new programs, creating accounts for
users (and deleting them when no longer needed), making certain that the
filesystem is not corrupted, and so on. If a computer were, say, a house,
system administration would be called maintenance, and would include
cleaning, fixing broken windows, and other such things.
The structure of this manual is such that many of the chapters should be
usable independently, so if you need information about backups, for
example, you can read just that chapter. However, this manual is first and
foremost a tutorial and can be read sequentially or as a whole.
While this manual is targeted at Linux, a general principle has been that it
should be useful with other UNIX based operating systems as well.
Unfortunately, since there is so much variance between different versions of
UNIX in general, and in system administration in particular, there is little
hope to cover all variants. Even covering all possibilities for Linux is difficult,
due to the nature of its development.
In trying to describe how things work, rather than just listing ``five easy steps''
for each task, there is much information here that is not necessary for
everyone, but those parts are marked as such and can be skipped if you use
a preconfigured system. Reading everything will, naturally, increase your
understanding of the system and should make using and administering it
more productive.
Understanding is the key to success with Linux. This book could just provide
recipes, but what would you do when confronted by a problem this book had
no recipe for? If the book can provide understanding, then recipes are not
required. The answers will be self evident.
Like all other Linux related development, the work to write this manual was
done on a volunteer basis: I did it because I thought it might be fun and
because I felt it should be done. However, like all volunteer work, there is a
limit to how much time, knowledge and experience people have. This means
that the manual is not necessarily as good as it would be if a wizard had
been paid handsomely to write it and had spent millennia to perfect it. Be
warned.
One particular point where corners have been cut is that many things that
are already well documented in other freely available manuals are not
always covered here. This applies especially to program specific
documentation, such as all the details of using mkfs. Only the purpose of the
program and as much of its usage as is necessary for the purposes of this
manual is described. For further information, consult these other manuals.
Usually, all of the referred-to documentation is part of the full Linux
documentation set.
Many people feel that Linux should really be called GNU/Linux. This is
because Linux is only the kernel, not the applications that run on it. Most of
the basic command line utilities were written by the Free Software
Foundation while developing their GNU operating system. Among those
utilities are some of the most basic commands like cp, mv, ls, and dd.
In a nutshell, what happened was, the FSF started developing GNU by writing
things like compilers, C libraries, and basic command line utilities before the
kernel. Linus Torvalds started Linux by writing the Linux kernel first and
using applications written for GNU.
I do not feel that this is the proper forum to debate what name people should
use when referring to Linux. I mention it here, because I feel it is important
to understand the relationship between GNU and Linux, and to also explain
why Linux is sometimes referred to as GNU/Linux. This document will
simply refer to it as Linux.
GNU's side of the issue is discussed on their website.
1.2. Trademarks
Red Hat is a trademark of Red Hat, Inc., in the United States and other
countries.
"God saw everything that he had made, and saw that it was very good. " −−
Bible King James Version. Genesis 1:31
This chapter gives an overview of a Linux system. First, the major services
provided by the operating system are described. Then, the programs that
implement these services are described with a considerable lack of detail.
The purpose of this chapter is to give an understanding of the system as a
whole, so that each part is described in detail elsewhere.
The kernel keeps track of files on the disk, starts programs and runs them
concurrently, assigns memory and other resources to various processes,
receives packets from and sends packets to the network, and so on. The
kernel does very little by itself, but it provides tools with which all services
can be built. It also prevents anyone from accessing the hardware directly,
forcing everyone to use the tools it provides. This way the kernel provides
some protection for users from each other. The tools provided by the kernel
are used via system calls. See manual page section 2 for more information
on these.
The system programs use the tools provided by the kernel to implement the
various services required from an operating system. System programs, and
all other programs, run `on top of the kernel', in what is called the user
mode. The difference between system and application programs is one of
intent: applications are intended for getting useful things done (or for
playing, if it happens to be a game), whereas system programs are needed
to get the system working. A word processor is an application; mount is a
system program. The difference is often somewhat blurry, however, and is
important only to compulsive categorizers.
Probably the most important parts of the kernel (nothing else works without
them) are memory management and process management. Memory
management takes care of assigning memory areas and swap space areas to
processes, parts of the kernel, and the buffer cache. Process
management creates processes, and implements multitasking by switching
the active process on the processor.
At the lowest level, the kernel contains a hardware device driver for each
kind of hardware it supports. Since the world is full of different kinds of
hardware, the number of hardware device drivers is large. There are often
many otherwise similar pieces of hardware that differ in how they are
controlled by software. The similarities make it possible to have general
classes of drivers that support similar operations; each member of the class
has the same interface to the rest of the kernel but differs in what it needs to
do to implement them. For example, all disk drivers look alike to the rest of
the kernel, i.e., they all have operations like `initialize the drive', `read sector
N', and `write sector N'.
Some software services provided by the kernel itself have similar properties,
and can therefore be abstracted into classes. For example, the various
network protocols have been abstracted into one programming interface, the
BSD socket library. Another example is the virtual filesystem (VFS) layer that
abstracts the filesystem operations away from their implementation. Each
filesystem type provides an implementation of each filesystem operation.
When some entity tries to use a filesystem, the request goes via the VFS,
which routes the request to the proper filesystem driver.
This section describes some of the more important UNIX services, but
without much detail. They are described more thoroughly in later chapters.
2.3.1. init
The single most important service in a UNIX system is provided by init. init is
started as the first process of every UNIX system, as the last thing the kernel
does when it boots. When init starts, it continues the boot process by doing
various startup chores (checking and mounting filesystems, starting
daemons, etc).
The exact list of things that init does depends on which flavor it is; there are
several to choose from. init usually provides the concept of single user mode,
in which no one can log in and root uses a shell at the console; the usual
mode is called multiuser mode. Some flavors generalize this as run levels;
single and multiuser modes are considered to be two run levels, and there
can be additional ones as well, for example, to run X on the console.
Linux allows for up to 10 runlevels, 0−9, but usually only some of these are
defined by default. Runlevel 0 is defined as ``system halt''. Runlevel 1 is
defined as ``single user mode''. Runlevel 3 is defined as ``multi user''
because it is the runlevel that the system boots into under normal day to day
conditions. Runlevel 5 is typically the same as 3 except that a GUI gets
started also. Runlevel 6 is defined as ``system reboot''. Other runlevels are
dependent on how your particular distribution has defined them, and they
vary significantly between distributions. Looking at the contents of
/etc/inittab usually will give some hint of what the predefined runlevels are and
what they have been defined as.
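As a concrete sketch, the default runlevel is the second colon-separated field of the initdefault line in /etc/inittab. The sample line below is made up; on a live system you would read the real /etc/inittab and use the runlevel and telinit commands instead.

```shell
# The initdefault line in /etc/inittab sets the runlevel the system
# boots into.  This sample line (a made-up example) selects runlevel 3:
line='id:3:initdefault:'
# The runlevel is the second colon-separated field:
echo "$line" | cut -d: -f2
# On a live system:
#   runlevel      # prints the previous and current runlevel, e.g. "N 3"
#   telinit 1     # as root, switch to single user mode
```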
In normal operation, init makes sure getty is working (to allow users to log in)
and adopts orphan processes (processes whose parent has died; in UNIX all
processes must be in a single tree, so orphans must be adopted).
When the system is shut down, it is init that is in charge of killing all other
processes, unmounting all filesystems and stopping the processor, along with
anything else it has been configured to do.
2.3.2. Logins from terminals
Logins from terminals (via serial lines) and the console (when not running X)
are provided by the getty program. init starts a separate instance of getty for
each terminal upon which logins are to be allowed. getty reads the username
and runs the login program, which reads the password. If the username and
password are correct, login runs the shell. When the shell terminates, i.e., the
user logs out, or when login terminates because the username and password
didn't match, init notices this and starts a new instance of getty. The kernel
has no notion of logins; this is all handled by the system programs.
2.3.3. Syslog
The kernel and many system programs produce error, warning, and other
messages. It is often important that these messages can be viewed later,
even much later, so they should be written to a file. The program doing this
is syslog. It can be configured to sort the messages into different files
according to writer or degree of importance. For example, kernel messages
are often directed to a separate file from the others, since kernel messages
are often more important and need to be read regularly to spot
problems. Chapter 15 will provide more on this.
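How messages are sorted is controlled by /etc/syslog.conf, where each line pairs a facility.priority selector with a destination. The line below is a hypothetical example, not taken from any particular distribution:

```shell
# A syslog.conf line sends messages matching the selector
# (facility.priority) to the named file.  This hypothetical line
# gives kernel messages their own file:
line='kern.*    /var/log/kern.log'
# Selector and destination are separated by whitespace:
echo "$line" | awk '{print "selector=" $1 " file=" $2}'
# On a live system, "logger -p user.notice test" submits a test message
# so you can see where your configuration routes it.
```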
Both users and system administrators often need to run commands
periodically. For example, the system administrator might want to clean the
directories containing temporary files (/tmp and /var/tmp) of old files, to
keep the disks from filling up, since not all programs clean up after
themselves correctly.
The cron service is set up to do this. Each user can have a crontab file, where
she lists the commands she wishes to execute and the times they should be
executed. The cron daemon takes care of starting the commands when
specified.
The at service is similar to cron, but it is once only: the command is executed
at the given time, but it is not repeated.
We will go more into this later. See the manual pages cron(1), crontab(1),
crontab(5), at(1) and atd(8) for more in depth information. Chapter 13 will
cover this.
UNIX and Linux don't incorporate the user interface into the kernel; instead,
they let it be implemented by user level programs. This applies for both text
mode and graphical environments.
This arrangement makes the system more flexible, but has the disadvantage
that it is simple to implement a different user interface for each program,
making the system harder to learn.
The graphical environment primarily used with Linux is called the X Window
System (X for short). X also does not implement a user interface; it only
implements a window system, i.e., tools with which a graphical user interface
can be implemented. Some popular window managers are fvwm, icewm,
blackbox, and windowmaker. There are also two popular desktop
environments, KDE and GNOME.
2.3.6. Networking
Networking is the act of connecting two or more computers so that they can
communicate with each other. The actual methods of connecting and
communicating are slightly complicated, but the end result is very useful.
UNIX operating systems have many networking features. Most basic services
(filesystems, printing, backups, etc) can be done over the network. This can
make system administration easier, since it allows centralized
administration, while still reaping the benefits of microcomputing and
distributed computing, such as lower costs and better fault tolerance.
However, this book merely glances at networking; see the Linux Network
Administrators' Guide http://www.tldp.org/LDP/nag2/index.html for more
information, including a basic description of how networks operate.
Network logins work a little differently than normal logins. For each person
logging in via the network there is a separate virtual network connection,
and there can be any number of these depending on the available
bandwidth. It is therefore not possible to run a separate getty for each
possible virtual connection. There are also several different ways to log in via
a network, telnet and ssh being the major ones in TCP/IP networks.
These days many Linux system administrators consider telnet and rlogin to
be insecure and prefer ssh, the ``secure shell'', which encrypts traffic going
over the network, thereby making it far less likely that the malicious can
``sniff'' your connection and gain sensitive data like usernames and
passwords. It is highly recommended you use ssh rather than telnet or rlogin.
Network logins have, instead of a herd of gettys, a single daemon per way of
logging in (telnet and ssh have separate daemons) that listens for all
incoming login attempts. When it notices one, it starts a new instance of
itself to handle that single attempt; the original instance continues to listen
for other attempts. The new instance works similarly to getty.
One of the more useful things that can be done with networking services is
sharing files via a network file system. Depending on your network this could
be done over the Network File System (NFS), or over the Common Internet
File System (CIFS). NFS is typically a UNIX-based service. In Linux, NFS is
supported by the kernel. CIFS, however, is not. In Linux, CIFS is supported by
Samba http://www.samba.org.
With a network file system any file operations done by a program on one
machine are sent over the network to another computer. This fools the
program into thinking that all the files on the other computer are actually on
the computer the program is running on. This makes information sharing
extremely simple, since it requires no modifications to programs.
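As a sketch, mounting a remote filesystem looks much like mounting a local one; the server name, share name, and mount points below are made-up examples.

```shell
# An /etc/fstab line for a permanent NFS mount (made-up server and paths);
# the third field names the filesystem type:
fstab_line='server.example.com:/home  /mnt/home  nfs  defaults  0  0'
echo "$fstab_line" | awk '{print "type=" $3}'
# On a live system, root would mount by hand with:
#   mount -t nfs server.example.com:/home /mnt/home
# and a CIFS share served by Samba with:
#   mount -t cifs //server/share /mnt/share -o username=joe
```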
2.3.9. Mail
Electronic mail is the most popularly used method for communicating via
computer. An electronic letter is stored in a file using a special format, and
special mail programs are used to send and read the letters.
Each user has an incoming mailbox (a file in the special format), where all
new mail is stored. When someone sends mail, the mail program locates the
receiver's mailbox and appends the letter to the mailbox file. If the receiver's
mailbox is in another machine, the letter is sent to the other machine, which
delivers it to the mailbox as it best sees fit.
The mail system consists of many programs. The delivery of mail to local or
remote mailboxes is done by one program (the mail transfer agent (MTA),
e.g., sendmail or postfix), while the programs users use are many and varied
(mail user agents (MUAs), e.g., pine or evolution). The mailboxes are usually
stored in /var/spool/mail until the user's MUA retrieves them.
For more information on setting up and running mail services you can read
the Mail Administrator HOWTO at
http://www.tldp.org/HOWTO/Mail−Administrator−HOWTO.html, or visit the
sendmail (http://www.sendmail.org/) or postfix (http://www.postfix.org/)
websites.
2.3.10. Printing
Only one person can use a printer at one time, but it is uneconomical not to
share printers between users. The printer is therefore managed by software
that implements a print queue: all print jobs are put into a queue and
whenever the printer is done with one job, the next one is sent to it
automatically. This relieves the users from organizing the print queue and
fighting over control of the printer. Instead, they form a new queue at the
printer, waiting for their printouts, since no one ever seems to be able to get
the queue software to know exactly when anyone's printout is really finished.
This is a great boost to intra−office social relations.
The print queue software also spools the printouts on disk, i.e., the text is
kept in a file while the job is in the queue. This allows an application program
to spit out the print jobs quickly to the print queue software; the application
does not have to wait until the job is actually printed to continue. This is
really convenient, since it allows one to print out one version, and not have
to wait for it to be printed before one can make a completely revised new
version. You can refer to the Printing−HOWTO located at
http://www.tldp.org/HOWTO/Printing−HOWTO/index.html for more help in
setting up printers.
The filesystem is divided into many parts; usually along the lines of a root
filesystem with /bin , /lib , /etc , /dev , and a few others; a /usr filesystem with
programs and unchanging data; a /var filesystem with changing data (such as
log files); and a /home for everyone's personal files. Depending on the
hardware configuration and the decisions of the system administrator, the
division can be different; it can even be all in one filesystem.
Chapter 3 describes the filesystem layout in some detail; the Filesystem
Hierarchy Standard covers it in somewhat more detail. This can be found on
the web at: http://www.pathname.com/fhs/
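On a running system, df shows how the directory tree is actually divided among filesystems; a machine split along the lines above shows separate lines for /, /usr, /var, and /home, while an all-in-one installation shows only /.

```shell
# List mounted filesystems with sizes and mount points; -P asks for
# portable one-line-per-filesystem output, -h for human-readable sizes.
df -hP
# Print just the mount point of the filesystem holding the root directory:
df -P / | awk 'NR==2 {print $6}'
```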
" Two days later, there was Pooh, sitting on his branch, dangling his legs, and
there, beside him, were four pots of honey..." (A.A. Milne)
This chapter describes the important parts of a standard Linux directory tree,
based on the Filesystem Hierarchy Standard . It outlines the normal way of
breaking the directory tree into separate filesystems with different purposes
and gives the motivation behind this particular split. Not all Linux
distributions follow this standard slavishly, but it is generic enough to give
you an overview.
3.1. Background
The full directory tree is intended to be breakable into smaller parts, each
capable of being on its own disk or partition, to accommodate disk size
limits and to ease backup and other system administration tasks. The major
parts are the root (/ ), /usr , /var , and /home filesystems (see Figure 3−1).
Each part has a different purpose. The directory tree has been designed so
that it works well in a network of Linux machines which may share some
parts of the filesystems over a read−only device (e.g., a CD−ROM), or over
the network with NFS. Probably the most important parts of the kernel
(nothing else works without them) are memory management and process
management. Memory management takes care of assigning memory areas
and swap space areas to processes, parts of the kernel, and for the buffer
cache. Process management creates processes, and implements multitasking
by switching the active process on the processor.
At the lowest level, the kernel contains a hardware device driver for each
kind of hardware it supports. Since the world is full of different kinds of
hardware, the number of hardware device drivers is large. There are often
many otherwise similar pieces of hardware that differ in how they are
controlled by software. The similarities make it possible to have general
classes of drivers that support similar operations; each member of the class
has the same interface to the rest of the kernel but differs in what it needs to
do to implement them. For example, all disk drivers look alike to the rest of
the kernel, i.e., they all have operations like `initialize the drive', `read sector
N', and `write sector N'.
Some software services provided by the kernel itself have similar properties,
and can therefore be abstracted into classes. For example, the various
network protocols have been abstracted into one programming interface, the
BSD socket library. Another example is the virtual filesystem (VFS) layer that
abstracts the filesystem operations away from their implementation. Each
filesystem type provides an implementation of each filesystem operation.
When some entity tries to use a filesystem, the request goes via the VFS,
which routes the request to the proper filesystem driver.
This section describes some of the more important UNIX services, but
without much detail. They are described more thoroughly in later chapters.
2.3.1. init
The single most important service in a UNIX system is provided by init init is
started as the first process of every UNIX system, as the last thing the kernel
does when it boots. When init starts, it continues the boot process by doing
various startup chores (checking and mounting filesystems, starting
daemons, etc).
The exact list of things that init does depends on which flavor it is; there are
several to choose from. init usually provides the concept of single user mode,
in which no one can log in and root uses a shell at the console; the usual
mode is called multiuser mode. Some flavors generalize this as run levels;
single and multiuser modes are considered to be two run levels, and there
can be additional ones as well, for example, to run X on the console.
Linux allows for up to 10 runlevels, 0−9, but usually only some of these are
defined by default. Runlevel 0 is defined as ``system halt''. Runlevel 1 is
defined as ``single user mode''. Runlevel 3 is defined as "multi user"
because it is the runlevel that the system boot into under normal day to day
conditions. Runlevel 5 is typically the same as 3 except that a GUI gets
started also. Runlevel 6 is defined as ``system reboot''. Other runlevels are
dependent on how your particular distribution has defined them, and they
vary significantly between distributions. Looking at the contents of
/etc/inittab usually will give some hint what the predefined runlevels are and
what they have been defined as.
In normal operation, init makes sure getty is working (to allow users to log in)
and to adopt orphan processes (processes whose parent has died; in UNIX all
processes must be in a single tree, so orphans must be adopted).
When the system is shut down, it is init that is in charge of killing all other
processes, unmounting all filesystems and stopping the processor, along with
anything else it has been configured to do.
2.3.2. Logins from terminals
Logins from terminals (via serial lines) and the console (when not running X)
are provided by the getty program. init starts a separate instance of getty for
each terminal upon which logins are to be allowed. getty reads the username
and runs the loginprogram, which reads the password. If the username and
password are correct, login runs the shell. When the shell terminates, i.e., the
user logs out, or when login terminated because the username and password
didn't match, init notices this and starts a new instance of getty. The kernel
has no notion of logins, this is all handled by the system programs.
2.3.3. Syslog
The kernel and many system programs produce error, warning, and other
messages. It is often important that these messages can be viewed later,
even much later, so they should be written to a file. The program doing this
is syslog . It can be configured to sort the messages to different files
according to writer or degree of importance. For example, kernel messages
are often directed to a separate file from the others, since kernel messages
are often more important and need to be read regularly to spot problems.
/var/tmp) from old files, to keep the disks from filling up, since not all
programs clean up after themselves correctly.
The cron service is set up to do this. Each user can have a crontab file, where
she lists the commands she wishes to execute and the times they should be
executed. The cron daemon takes care of starting the commands when
specified.
The at service is similar to cron, but it is once only: the command is executed
at the given time, but it is not repeated.
We will go more into this later. See the manual pages cron(1), crontab(1),
crontab(5), at(1) and atd(8) for more in depth information.
Chapter 13 will cover this.
UNIX and Linux don't incorporate the user interface into the kernel; instead,
they let it be implemented by user level programs. This applies for both text
mode and graphical environments.
This arrangement makes the system more flexible, but has the disadvantage
that it is simple to implement a different user interface for each program,
making the system harder to learn.
The graphical environment primarily used with Linux is called the X Window
System (X for short). X also does not implement a user interface; it only
implements a window system, i.e., tools with which a graphical user interface
can be implemented. Some popular window managers are: fvwm , icewm ,
blackbox , and windowmaker . There are also two popular desktop managers,
KDE and Gnome.
2.3.6. Networking
Networking is the act of connecting two or more computers so that they can
communicate with each other. The actual methods of connecting and
communicating are slightly complicated, but the end result is very useful.
UNIX operating systems have many networking features. Most basic services
(filesystems, printing, backups, etc) can be done over the network. This can
make system administration easier, since it allows centralized
administration, while still reaping in the benefits of microcomputing and
distributed computing, such as lower costs and better fault tolerance.
However, this book merely glances at networking; see the Linux Network
Administrators' Guide http://www.tldp.org/LDP/nag2/index.html for more
information, including a basic description of how networks operate.
Network logins work a little differently than normal logins. For each person
logging in via the network there is a separate virtual network connection,
and there can be any number of these depending on the available
bandwidth. It is therefore not possible to run a separate getty for each
possible virtual connection. There are also several different ways to log in via
a network, telnet and ssh being the major ones in TCP/IP networks.
These days many Linux system administrators consider telnet and rlogin to
be insecure and prefer ssh, the ``secure shell'', which encrypts traffic going
over the network, thereby making it far less likely that the malicious can
``sniff'' your connection and gain sensitive data like usernames and
passwords. It is highly recommended you use ssh rather than telnet or rlogin.
Network logins have, instead of a herd of gettys, a single daemon per way of
logging in (telnet and ssh have separate daemons) that listens for all
incoming login attempts. When it notices one, it starts a new instance of
itself to handle that single attempt; the original instance continues to listen
for other attempts. The new instance works similarly to getty.
One of the more useful things that can be done with networking services is
sharing files via a network file system. Depending on your network this could
be done over the Network File System (NFS), or over the Common Internet
File System (CIFS). NFS is typically a 'UNIX' based service. In Linux, NFS is
supported by the kernel. CIFS however is not. In Linux, CIFS is supported by
Samba http://www.samba.org.
With a network file system any file operations done by a program on one
machine are sent over the network to another computer. This fools the
program to think that all the files on the other computer are actually on the
computer the program is running on. This makes information sharing
extremely simple, since it requires no modifications to programs.
2.3.9. Mail
Electronic mail is the most popularly used method for communicating via
computer. An electronic letter is stored in a file using a special format, and
special mail programs are used to send and read the letters.
Each user has an incoming mailbox (a file in the special format), where all
new mail is stored. When someone sends mail, the mail program locates the
receiver's mailbox and appends the letter to the mailbox file. If the receiver's
mailbox is in another machine, the letter is sent to the other machine, which
delivers it to the mailbox as it best sees fit.
The mail system consists of many programs. The delivery of mail to local or
remote mailboxes is done by one program (the mail transfer agent (MTA),
e.g., sendmail or postfix), while the programs users use are many and varied
(the mail user agent (MUA), e.g., pine or evolution). The mailboxes are usually
stored in /var/spool/mail until the user's MUA retrieves them.
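The mailbox file itself is plain text in the traditional mbox format: each message begins with a ``From '' separator line, and delivery is just an append. A minimal sketch of local delivery (the addresses are invented, and a scratch file stands in for the real spool):

```shell
# append one message, in mbox format, to a scratch mailbox file
mbox=$(mktemp)                       # stand-in for /var/spool/mail/receiver
cat >> "$mbox" <<'EOF'
From sender@example.com Sat Jan  1 12:00:00 2005
From: sender@example.com
To: receiver@example.com
Subject: hello

Test message body.
EOF
grep -c '^From ' "$mbox"             # count messages in the mailbox: prints 1
```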
For more information on setting up and running mail services you can read
the Mail Administrator HOWTO at
http://www.tldp.org/HOWTO/Mail-Administrator-HOWTO.html, or visit the
sendmail or postfix websites: http://www.sendmail.org/ or
http://www.postfix.org/.
2.3.10. Printing
Only one person can use a printer at one time, but it is uneconomical not to
share printers between users. The printer is therefore managed by software
that implements a print queue: all print jobs are put into a queue and
whenever the printer is done with one job, the next one is sent to it
automatically. This relieves the users from organizing the print queue and
fighting over control of the printer. Instead, they form a new queue at the
printer, waiting for their printouts, since no one ever seems to be able to get
the queue software to know exactly when anyone's printout is really finished.
This is a great boost to intra−office social relations.
The print queue software also spools the printouts on disk, i.e., the text is
kept in a file while the job is in the queue. This allows an application program
to spit out the print jobs quickly to the print queue software; the application
does not have to wait until the job is actually printed to continue. This is
really convenient, since it allows one to print out one version, and not have
to wait for it to be printed before one can make a completely revised new
version.
The filesystem is divided into many parts; usually along the lines of a root
filesystem with /bin , /lib , /etc , /dev , and a few others; a /usr filesystem with
programs and unchanging data; /var filesystem with changing data (such as
log files); and a /home for everyone's personal files. Depending on the
hardware configuration and the decisions of the system administrator, the
division can be different; it can even be all in one filesystem.
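You can see how your own system is divided by listing the mounted filesystems; whether /usr, /var, and /home appear on separate lines depends entirely on how the machine was partitioned:

```shell
# show every mounted filesystem, its size, and its mount point
df -h
```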
Chapter 3 describes the filesystem layout in some detail; the Filesystem
Hierarchy Standard covers it in somewhat more detail. It can be found on
the web at http://www.pathname.com/fhs/.
" Two days later, there was Pooh, sitting on his branch, dangling his legs, and
there, beside him, were four pots of honey..." (A.A. Milne)
This chapter describes the important parts of a standard Linux directory tree,
based on the Filesystem Hierarchy Standard . It outlines the normal way of
breaking the directory tree into separate filesystems with different purposes
and gives the motivation behind this particular split. Not all Linux
distributions follow this standard slavishly, but it is generic enough to give
you an overview.
3.1. Background
This chapter does not explain all files in detail. The intention is not to
describe every file, but to give an overview of the system from a filesystem
point of view. Further information on each file is available elsewhere in this
manual or in the Linux manual pages.
The full directory tree is intended to be breakable into smaller parts, each
capable of being on its own disk or partition, to accommodate disk size
limits and to ease backup and other system administration tasks. The major
parts are the root (/), /usr, /var, and /home filesystems (see Figure 3-1).
Each part has a different purpose. The directory tree has been designed so
that it works well in a network of Linux machines which may share some
parts of the filesystems over a read−only device (e.g., a CD−ROM), or over
the network with NFS.
The root filesystem is specific for each machine (it is generally stored on a
local disk, although it could be a ramdisk or network drive as well) and
contains the files that are necessary for booting the system up, and to bring
it up to such a state that the other filesystems may be mounted. The
contents of the root filesystem will therefore be sufficient for the single user
state. It will also contain tools for fixing a broken system, and for recovering
lost files from backups.
The /usr filesystem contains all commands, libraries, manual pages, and
other unchanging files needed during normal operation. No files in /usr
should be specific to any given machine, nor should they be modified during
normal use. This allows the files to be shared over the network, which can be
cost-effective since it saves disk space (there can easily be hundreds of
megabytes, increasingly multiple gigabytes, in /usr). Having /usr network
mounted can also make administration easier: only the master /usr needs to
be changed when updating an application, not each machine separately.
Even if the filesystem is on a local disk, it could be mounted
read-only, to lessen the chance of filesystem corruption during a crash.
The /var filesystem contains files that change, such as spool directories (for
mail, news, printers, etc), log files, formatted manual pages, and temporary
files. Traditionally everything in /var has been somewhere below /usr , but
that made it impossible to mount /usr read−only.
The /home filesystem contains the users' home directories, i.e., all the real
data on the system. Separating home directories into their own directory tree
or filesystem makes backups easier; the other parts often do not have to be
backed up, or at least not as often, since they seldom change. A big /home
might have to be broken across several filesystems, which requires adding
an extra naming level below /home, for example /home/students and
/home/staff.
Although the different parts have been called filesystems above, there is no
requirement that they actually be on separate filesystems. They could easily
be kept in a single one if the system is a small single−user system and the
user wants to keep things simple. The directory tree might also be divided
into filesystems differently, depending on how large the disks are, and how
space is allocated for various purposes. The important part, though, is that
all the standard names work; even if, say, /var and /usr are actually on the
same partition, the names /usr/lib/libc.a and /var/log/messages must work,
for example by moving files below /var into /usr/var, and making /var a
symlink to /usr/var.
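The /var-into-/usr/var trick mentioned above can be sketched in a scratch directory (the paths are illustrative; doing this on a live root filesystem would normally be done in single-user mode):

```shell
# build a toy root, move var under usr, and leave a compatibility symlink
root=$(mktemp -d)
mkdir -p "$root/usr" "$root/var/log"
echo "boot ok" > "$root/var/log/messages"
mv "$root/var" "$root/usr/var"       # the real data now lives in usr/var
ln -s usr/var "$root/var"            # the standard name still works
cat "$root/var/log/messages"         # prints: boot ok
```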
The Unix filesystem structure groups files according to purpose, i.e., all
commands are in one place, all data files in another, documentation in a
third, and so on. An alternative would be to group files according to the
program they belong to, i.e., all Emacs files would be in one directory, all TeX
files in another, and so on. The problem with the latter approach is that it makes
it difficult to share files (the program directory often contains both static,
sharable files and changing, non-sharable files), and sometimes even to find
the files (e.g., manual pages in a huge number of places; making the
manual page programs find all of them is a maintenance nightmare).
The root filesystem should generally be small, since it contains very critical
files and a small, infrequently modified filesystem has a better chance of not
getting corrupted. A corrupted root filesystem will generally mean that the
system becomes unbootable except with special measures (e.g., from a
floppy), so you don't want to risk it.
The root directory generally doesn't contain any files, except perhaps on
older systems where the standard boot image for the system, usually
called /vmlinuz, was kept there. (Most distributions have moved those files
to the /boot directory.) Otherwise, all files are kept in subdirectories under
the root filesystem:
/bin
Commands needed during bootup that might be used by normal users
(probably after bootup).
/sbin
Like /bin, but the commands are not intended for normal users, although
they may use them if necessary and allowed. /sbin is not usually in the
default path of normal users, but will be in root's default path.
/etc
Configuration files specific to the machine.
/root
The home directory for user root. This is usually not accessible to other users
on the system.
/lib
Shared libraries needed by the programs on the root filesystem.
/lib/modules
Loadable kernel modules, especially those that are needed to boot the
system when recovering from disasters (e.g., network and filesystem
drivers).
/dev
Device files. These are special files that help the user interface with the
various devices on the system.
/tmp
Temporary files. Programs running after bootup should use /var/tmp instead,
since it is likely to reside on a disk with more space.
/boot
Files used by the bootstrap loader, e.g., LILO or GRUB. Kernel images are
often kept here instead of in the root directory. If there are many kernel
images, the directory can easily grow rather big, and it might be better to
keep it in a separate filesystem. Another reason would be to make sure the
kernel images are within the first 1024 cylinders of an IDE disk. This 1024
cylinder limit is no longer true in most cases. With modern BIOSes and later
versions of LILO (the LInux LOader) the 1024 cylinder limit can be passed
with logical block addressing (LBA). See the lilo manual page for more
details.
/mnt
Mount point for temporary mounts by the system administrator.
/proc, /usr, /var, /home
Mount points for the other filesystems. Although /proc does not reside on any
disk in reality it is still mentioned here. See the section about /proc later in
the chapter.
The /etc directory maintains a lot of files. Some of them are described below. For
others, you should determine which program they belong to and read the
manual page for that program. Many networking configuration files are in
/etc as well, and are described in the Networking Administrators' Guide.
/etc/passwd
The user database, with fields giving the username, real name, home
directory, and other information about each user. The format is documented
in the passwd manual page.
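Each line of /etc/passwd holds seven colon-separated fields: username, password (or an x pointing to /etc/shadow), UID, GID, real name, home directory, and login shell. A quick way to pick the fields apart (the user shown here is invented):

```shell
# split a sample passwd entry into its fields with awk
line='joanna:x:1001:1001:Joanna Oja:/home/joanna:/bin/bash'
echo "$line" | awk -F: '{print "user=" $1, "uid=" $3, "home=" $6, "shell=" $7}'
# prints: user=joanna uid=1001 home=/home/joanna shell=/bin/bash
```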
/etc/shadow
Shadow password file on systems with shadow passwords installed. It moves
the encrypted passwords out of the world-readable /etc/passwd into a file
readable only by root.
/etc/fdprm
Floppy disk parameter table. Describes what different floppy disk formats
look like. Used by setfdprm . See the setfdprm manual page for more
information.
/etc/fstab
Lists the filesystems mounted automatically at startup by the mount -a
command. See the fstab manual page for more information.
/etc/group
Similar to /etc/passwd, but describes groups instead of users. See the group
manual page in section 5 for more information.
/etc/inittab
Configuration file for init.
/etc/issue
Output by getty before the login prompt. Usually contains a short description
or welcoming message to the system. The contents are up to the system
administrator.
/etc/magic
The configuration file for file. Contains the descriptions of various file formats
based on which file guesses the type of the file. See the magic and file
manual pages for more information.
/etc/motd
The message of the day, automatically output after a successful login.
Contents are up to the system administrator.
/etc/mtab
List of currently mounted filesystems. Initially set up by the bootup scripts,
and updated automatically by the mount command. Used when a list of
mounted filesystems is needed, for example by the df command.
/etc/login.defs
Configuration file for the login command. The login.defs file usually has a
manual page in section 5.
/etc/printcap
Like /etc/termcap, but intended for printers. Uses a different syntax.
/etc/securetty
Identifies secure terminals, i.e., the terminals from which root is allowed to
log in. Typically only the virtual consoles are listed, so that it becomes
impossible (or at least harder) to gain superuser privileges by breaking into a
system over a modem or a network. Do not allow root logins over a network.
Prefer to log in as an unprivileged user and use su or sudo to gain root
privileges.
/etc/shells
Lists trusted shells. The chsh command allows users to change their login
shell only to shells listed in this file. ftpd, the server process that provides
FTP services for a machine, will check that the user's shell is listed in
/etc/shells and will not let people log in unless the shell is listed there.
/etc/termcap
The terminal capability database. Describes by what escape sequences
various terminal types can be controlled. See the termcap manual page for
more information.
The /dev directory contains the special device files for all the devices. The
device files are created during installation, and later with the /dev/MAKEDEV
script. The /dev/MAKEDEV.local is a script written by the system
administrator that creates local−only device files or links (i.e. those that are
not part of the standard MAKEDEV, such as device files for some
non−standard device driver).
If you think there are other devices which should be included here but aren't
then let me know. I will try to include them in the next revision.
/dev/dsp
Digital Signal Processor. Basically this forms the interface between software
which produces sound and your soundcard. It is a character device on major
node 14 and minor 3.
/dev/fd0
The first floppy drive. If you are lucky enough to have several drives then
they will be numbered sequentially. It is a block device on major node 2
and minor 0.
/dev/fb0
The first framebuffer device. A framebuffer is an abstraction layer between
software and the graphics hardware. It is a character device on major node
29, minor node 0.
/dev/hda
/dev/hda is the master IDE drive on the primary IDE controller. /dev/hdb the
slave drive on the primary controller. /dev/hdc , and /dev/hdd are the master
and slave devices on the secondary controller respectively. Each disk is
divided into partitions. Partitions 1−4 are primary partitions and partitions 5
and above are logical partitions inside extended partitions. Therefore the
device file which references each partition is made up of several parts. For
example /dev/hdc9 references partition 9 (a logical partition inside an
extended partition type) on the master IDE drive on the secondary IDE
controller. The major and minor node numbers are somewhat complex. For
the first IDE controller all partitions are block devices on major node 3. The
master drive hda is at minor 0 and the slave drive hdb is at minor 64. For
each partition inside the drive add the partition number to the minor
node number for the drive. For example /dev/hdb5 is major 3, minor 69 (64 +
5 = 69). Drives on the secondary interface are handled the same way, but
with major node 22.
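The minor-number rule above is plain arithmetic: the base minor of the drive plus the partition number. For example:

```shell
# /dev/hdb5: hdb's base minor is 64, partition 5 -> minor 69 (on major 3)
echo $((64 + 5))
# /dev/hdd2: hdd's base minor is also 64 (on major 22), partition 2 -> minor 66
echo $((64 + 2))
```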
/dev/ht0
The first IDE tape drive. Subsequent drives are numbered ht1 etc. They are
character devices on major node 37, starting at minor node 0 for ht0, 1 for
ht1, and so on.
/dev/js0
The first analogue joystick. Subsequent joysticks are numbered js1, js2 etc.
Digital joysticks are called djs0, djs1 and so on. They are character devices
on major node 15. The analogue joysticks start at minor node 0 and go up to
127 (more than enough for even the most fanatic gamer). Digital joysticks
start at minor node 128.
/dev/lp0
The first parallel printer device. Subsequent printers are numbered lp1, lp2,
etc. They are character devices on major node 6, with minor nodes starting at
0 and numbered sequentially.
/dev/loop0
The first loopback device. Loopback devices are used for mounting
filesystems which are not located on other block devices such as disks. For
example if you wish to mount an iso9660 CD ROM image without burning it
to CD then you need to use a loopback device to do so. This is usually
transparent to the user and is handled by the mount command. Refer to the
manual pages for mount and losetup. The loopback devices are block
devices on major node 7 and with minor nodes starting at 0 and numbered
sequentially.
/dev/md0
The first metadisk (software RAID) group. Metadisks are block devices on
major node 9, with minor nodes starting at 0 and numbered sequentially.
/dev/mixer
This is part of the OSS (Open Sound System) driver. Refer to the OSS
documentation at http://www.opensound.com for more details. It is a
character device on major node 14, minor node 0.
/dev/null
The bit bucket. A black hole where you can send data for it never to be seen
again. Anything sent to /dev/null will disappear. This can be useful if, for
example, you wish to run a command but not have any feedback appear on
the terminal. It is a character device on major node 1 and minor node 3.
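For example, to run a command while throwing its normal output away:

```shell
# redirect standard output to the bit bucket; only errors would reach the terminal
echo "you will never see this" > /dev/null
ls -l /dev/null        # the leading 'c' marks it as a character device
```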
/dev/psaux
The PS/2 mouse port. This is a character device on major node 10, minor
node 1.
/dev/pda
Parallel port IDE disks. These are named similarly to disks on the internal IDE
controllers
(/dev/hd*). They are block devices on major node 45. Minor nodes need
slightly more explanation here. The first device is /dev/pda and it is on minor
node 0. Partitions on this device are found by adding the partition number to
the minor number for the device. Each device is limited to 15 partitions
rather than 63 (the limit for internal IDE disks). /dev/pdb minor nodes start at
16, /dev/pdc at 32, and /dev/pdd at 48. So for example the minor node
number for /dev/pdc6 would be 38 (32 + 6 = 38). This scheme limits you to 4
parallel disks of 15 partitions each.
/dev/pcd0
Parallel port CD ROM drives. These are numbered from 0 onwards. All are
block devices on major node 46. /dev/pcd0 is on minor node 0 with
subsequent drives being on minor nodes 1, 2, 3 etc.
/dev/pt0
Parallel port tape devices. Tapes do not have partitions so these are just
numbered sequentially. They are character devices on major node 96. The
minor node numbers start from 0 for /dev/pt0, 1 for /dev/pt1, and so on.
/dev/parport0
The raw parallel ports. Most devices which are attached to parallel ports
have their own drivers. This is a device to access the port directly. It is a
character device on major node 99 with minor node 0. Subsequent devices
after the first are numbered sequentially incrementing the minor node.
/dev/random or /dev/urandom
These are kernel random number generators. /dev/random is a
non−deterministic generator which means that the value of the next number
cannot be guessed from the preceding ones. It uses the entropy of the
system hardware to generate numbers. When it has no more entropy to use
then it must wait until it has collected more before it will allow any more
numbers to be read from it.
/dev/urandom works similarly. Initially it also uses the entropy of the system
hardware, but when there is no more entropy to use it will continue to return
numbers using a pseudo-random number generating formula. This is
considered to be less secure for vital purposes such as cryptographic key
pair generation. If security is your overriding concern then use /dev/random;
if speed is more important then /dev/urandom works fine. They are character
devices on major node 1, with minor node 8 for /dev/random and 9 for
/dev/urandom.
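Reading a few bytes from either device is enough to see them in action; /dev/urandom never blocks, so it is the usual choice for casual randomness:

```shell
# take 16 bytes from the non-blocking generator and show them as hex
head -c 16 /dev/urandom | od -An -tx1
```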
/dev/sda
The first SCSI drive on the first SCSI bus. The following drives are named
similar to IDE drives. /dev/sdb is the second SCSI drive, /dev/sdc is the third
SCSI drive, and so forth.
/dev/ttyS0
The first serial port. Many times this is the port used to connect an external
modem to your system.
/dev/zero
This is a simple way of getting many 0s. Every time you read from this
device it will return 0. This can be useful sometimes, for example when you
want a file of fixed length but don't really care what it contains. It is a
character device on major node 1 and minor node 5.
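For example, creating a fixed-length file full of zero bytes:

```shell
# make a file whose length matters but whose content does not
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=512 count=4 2>/dev/null
wc -c < "$tmpfile"     # 512 * 4 = 2048 bytes
```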
The /usr filesystem is often large, since all programs are installed there. All
files in /usr usually come from a Linux distribution; locally installed programs
and other stuff goes below /usr/local. This makes it possible to update the
system from a new version of the distribution, or even a completely new
distribution, without having to install all programs again. Some of the
subdirectories of /usr are listed below (some of the less important directories
have been dropped; see the FSSTND for more information).
/usr/X11R6
The X Window System, all files. To simplify the development and installation
of X, the X files have not been integrated into the rest of the system. There is
a directory tree below /usr/X11R6 similar to that below /usr itself.
/usr/bin
Almost all user commands. Some commands are in /bin or in /usr/local/bin.
/usr/sbin
System administration commands that are not needed on the root
filesystem, e.g., most server programs.
/usr/include
Header files for the C programming language.
/usr/lib
Unchanging data files for programs and subsystems, including some
site-wide configuration files. The name lib comes from library; originally
libraries of programming subroutines were stored in /usr/lib.
/usr/local
The place for locally installed software and other files. Distributions may not
install anything in here. It is reserved solely for the use of the local
administrator. This way he can be absolutely certain that no updates or
upgrades to his distribution will overwrite any extra software he has installed
locally.
The /var filesystem contains data that is changed when the system is running
normally. It is specific to each system, i.e., not shared over the network with
other computers.
/var/cache/man
A cache for man pages that are formatted on demand. The source for manual
pages is usually stored in /usr/share/man/man?/ (where ? is the manual
section. See the manual page for man in section 7); some manual pages
might come with a pre−formatted version, which might be stored in
/usr/share/man/cat* . Other manual pages need to be formatted when they
are first viewed; the formatted version is then stored in /var/cache/man so
that the next person to view the same page won't have to wait for it to be
formatted.
/var/games
Any variable data belonging to games in /usr should be placed here. This is in
case /usr is mounted read only.
/var/lib
Files that change while the system is running normally.
/var/local
Variable data for programs that are installed in /usr/local (i.e., programs that
have been installed by the system administrator). Note that even locally
installed programs should use the other /var directories if they are
appropriate, e.g., /var/lock.
/var/lock
Lock files. Many programs follow a convention to create a lock file in /var/lock
to indicate that they are using a particular device or file. Other programs will
notice the lock file and won't attempt to use the device or file.
/var/log
Log files from various programs, especially login (/var/log/wtmp, which logs
all logins and logouts into the system) and syslog (/var/log/messages, where
all kernel and system program messages are usually stored).
/var/mail
This is the FHS approved location for user mailbox files. Depending on how
far your distribution has gone towards FHS compliance, these files may still
be held in /var/spool/mail.
/var/run
Files that contain information about the system that is valid until the system
is next booted. For example, /var/run/utmp contains information about
people currently logged in.
/var/spool
Directories for news, printer queues, and other queued work. Each different
spool has its own subdirectory below /var/spool, e.g., the news spool is in
/var/spool/news . Note that some installations which are not fully compliant
with the latest version of the FHS may have user mailboxes under
/var/spool/mail.
/var/tmp
Temporary files that are large or that need to exist for a longer time than
what is allowed for /tmp .
(Although the system administrator might not allow very old files in /var/tmp
either.)
/proc/1
A directory with information about process number 1. Each process has a
directory below /proc with the name being its process identification number.
/proc/cpuinfo
Information about the processor, such as its type, make, model, and
performance.
/proc/devices
List of device drivers configured into the currently running kernel.
/proc/dma
Shows which DMA channels are being used at the moment.
/proc/filesystems
Filesystems configured into the kernel.
/proc/interrupts
Shows which interrupts are in use, and how many of each there have been.
/proc/ioports
Which I/O ports are in use at the moment.
/proc/kcore
An image of the physical memory of the system. This is exactly the same
size as your physical memory, but does not really take up that much
memory; it is generated on the fly as programs access it. (Remember: unless
you copy it elsewhere, nothing under /proc takes up any disk space at all.)
/proc/kmsg
Messages output by the kernel. These are also routed to syslog.
/proc/ksyms
Symbol table for the kernel.
/proc/loadavg
The `load average' of the system; three meaningless indicators of how much
work the system has to do at the moment.
/proc/meminfo
Information about memory usage, both physical and swap.
/proc/modules
Which kernel modules are loaded at the moment.
/proc/net
Status information about network protocols.
/proc/self
A symbolic link to the process directory of the program that is looking at
/proc. This makes it easier for programs to find their own process directory.
/proc/stat
Various statistics about the system, such as the number of page faults since
the system was booted.
/proc/uptime
The time the system has been up.
/proc/version
The kernel version.
Note that while the above files tend to be easily readable text files, they can
sometimes be formatted in a way that is not easily digestible. There are
many commands that do little more than read the above files and format
them for easier understanding. For example, the free program reads
/proc/meminfo and converts the amounts given in bytes to kilobytes (and
adds a little more information as well).
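Since the files under /proc are ordinary text as far as reading goes, you can inspect them with the usual tools:

```shell
# kernel-maintained text files can be read like any other file
cat /proc/loadavg              # the three load averages, plus scheduler info
grep MemTotal /proc/meminfo    # one line of the memory summary
```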
This chapter gives an overview of what a device file is, and how to create
one. The canonical list of device files is
/usr/src/linux/Documentation/devices.txt if you have the Linux kernel source
code installed on your system. The devices listed here are correct as of
kernel version 2.6.8.
Most device files will already be created and will be there ready to use after
you install your Linux system. If by some chance you need to create one
which is not provided then you should first try to use the MAKEDEV script.
This script is usually located in /dev/MAKEDEV but might also have a copy (or
a symbolic link) in /sbin/MAKEDEV. If it turns out not to be in your path then
you will need to specify the path to it explicitly. For example, to create the
device file for the first serial port you would run (as root):
# /dev/MAKEDEV -v ttyS0
This will create the device file /dev/ttyS0 with major node 4 and minor node
64 as a character device with access permissions 0660 with owner root and
group dialout. ttyS0 is a serial port. The major and minor node numbers are
numbers understood by the kernel. The kernel refers to hardware devices by
numbers; these would be very difficult for us to remember, so we use
filenames. Access permissions of 0660 means read and write permission for
the owner (root in this case) and read and write permission for members of
the group (dialout in this case), with no access for anyone else.
MAKEDEV is the preferred way of creating device files which are not present.
However sometimes the MAKEDEV script will not know about the device file
you wish to create. This is where the mknod command comes in. In order to
use mknod you need to know the major and minor node numbers for the
device you wish to create. The devices.txt file in the kernel source
documentation is the canonical source of this information.
To take an example, let us suppose that our version of the MAKEDEV script
does not know how to create the
/dev/ttyS0 device file. We need to use mknod to create it. We know from
looking at the devices.txt that it should be a character device with major
number 4 and minor number 64. So we now know all we need to create the
file.
# mknod /dev/ttyS0 c 4 64
As you can see, more steps are required to create the file this way: you have
to find out the device type and the major and minor node numbers yourself.
It is unlikely in the extreme that the ttyS0 file would not be provided by the
MAKEDEV script, but it suffices to illustrate the point.
lspci
To be added
lsdev
To be added
4.1.5. The lsusb command
lsusb
To be added
lsraid
To be added
hdparm
To be added
To be added
4.2.1. lsmod
lsmod
To be added
4.2.2. insmod
insmod
To be added
4.2.3. depmod
depmod
To be added
4.2.4. rmmod
rmmod
To be added
4.2.5. modprobe
modprobe
To be added
"On a clear disk you can seek forever."
When you install or upgrade your system, you need to do a fair amount of
work on your disks. You have to make filesystems on your disks so that files
can be stored on them and reserve space for the different parts of your
system.
This chapter explains all these initial activities. Usually, once you get your
system set up, you won't have to go through the work again, except for using
floppies. You'll need to come back to this chapter if you add a new disk or
want to fine−tune your disk usage.
Format your disk. This does various things to prepare it for use, such as
checking for bad sectors. (Formatting is nowadays not necessary for most
hard disks.)
Partition a hard disk, if you want to use it for several activities that aren't
supposed to interfere with one another. One reason for partitioning is to store
different operating systems on the same disk. Another reason is to keep user
files separate from system files, which simplifies back−ups and helps protect
the system files from corruption.
Make a filesystem (of a suitable type) on each disk or partition. The disk
means nothing to Linux until you make a filesystem; then files can be
created and accessed on it.
Since devices show up as files in the filesystem (in the /dev directory), it is
easy to see just what device files exist, using ls or another suitable
command. In the output of ls -l, the first column contains the type of the file
and its permissions.
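For example (the exact permissions, sizes, and dates will differ from system to system):

```shell
# a character device, a directory, and an ordinary file
ls -l /dev/null      # first column starts with 'c', e.g. crw-rw-rw-
ls -ld /tmp          # starts with 'd': a directory
ls -l /etc/passwd    # starts with '-': an ordinary file
```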
The first character in the first column, i.e., `c' in crw−rw−rw− above, tells an
informed user the type of the file, in this case a character device. For
ordinary files, the first character is `−', for directories it is `d', and for block
devices `b'; see the ls man page for further information.
Note that usually all device files exist even though the device itself might
not be installed. So just because you have a file /dev/sda, it doesn't mean
that you really do have an SCSI hard disk. Having all the device files makes
the installation programs simpler, and makes it easier to add new hardware
(there is no need to find out the correct parameters for and create the device
files for the new device).
The processor (CPU) and the actual disk communicate through a disk
controller . This relieves the rest of the computer from knowing how to use
the drive, since the controllers for different types of disks can be made to
use the same interface towards the rest of the computer. Therefore, the
computer can say just ``hey disk, give me what I want'', instead of a long
and complex series of electric signals to move the head to the proper
location and waiting for the correct position to come under the head and
doing all the other unpleasant stuff necessary. (In reality, the interface to the
controller is still complex, but much less so than it would otherwise be.) The
controller may also do other things, such as caching, or automatic bad sector
replacement.
The above is usually all one needs to understand about the hardware. There
are also other things, such as the motor that rotates the platters and moves
the heads, and the electronics that control the operation of the mechanical
parts, but they are mostly not relevant for understanding the working
principles of a hard disk.
The surfaces are usually divided into concentric rings, called tracks, and
these in turn are divided into sectors. This division is used to specify
locations on the hard disk and to allocate disk space to files. To find a given
place on the hard disk, one might say ``surface 3, track 5, sector 7''. Usually
the number of sectors is the same for all tracks, but some hard disks put
more sectors in outer tracks (all sectors are of the same physical size, so
more of them fit in the longer outer tracks). Typically, a sector will hold 512
bytes of data. The disk itself can't handle smaller amounts of data than one
sector.
Each surface is divided into tracks (and sectors) in the same way. This means
that when the head for one surface is on a track, the heads for the other
surfaces are also on the corresponding tracks. All the corresponding tracks
taken together are called a cylinder. It takes time to move the heads from
one track (cylinder) to another, so by placing the data that is often accessed
together (say, a file) so that it is within one cylinder, it is not necessary to
move the heads to read all of it. This improves performance. It is not always
possible to place files like this; files that are stored in several places on the
disk are called fragmented.
The number of surfaces (or heads, which is the same thing), cylinders, and
sectors vary a lot; the specification of the number of each is called the
geometry of a hard disk. The geometry is usually stored in a special,
battery−powered memory location called the CMOS RAM , from where the
operating system can fetch it during bootup or driver initialization.
The translation is only a problem for IDE disks. SCSI disks use a sequential
sector number (i.e., the controller translates a sequential sector number to a
head, cylinder, and sector triplet), and a completely different method for the
CPU to talk with the controller, so they are insulated from the problem. Note,
however, that the computer might not know the real geometry of an SCSI
disk either.
Since Linux often will not know the real geometry of a disk, its filesystems
don't even try to keep files within a single cylinder. Instead, they try to assign
sequentially numbered sectors to files, which almost always gives similar
performance. The issue is further complicated by on-controller caches and
automatic prefetches done by the controller.
Each hard disk is represented by a separate device file. There can (usually)
be only two or four IDE hard disks. These are known as /dev/hda, /dev/hdb,
/dev/hdc, and /dev/hdd, respectively. SCSI hard disks are known as
/dev/sda, /dev/sdb, and so on. Similar naming conventions exist for other
hard disk types; see Chapter 4 for more information. Note that the device
files for the hard disks give access to the entire disk, with no regard to
partitions (which will be discussed below), and it's easy to mess up the
partitions or the data in them if you aren't careful. The disks' device files are
usually used only to get access to the master boot record (which will also be
discussed below).
Two networking protocols commonly used in a SAN are fibre channel and
iSCSI . A fibre channel network is very fast and is not burdened by the other
network traffic in a company's LAN. However, it's very expensive. Fibre
channel cards cost around $1000.00 USD each. They also require special
fibre channel switches.
Similar to a SAN, a NAS needs to make use of a protocol to allow access to its
disks. With a NAS this is either CIFS/Samba or NFS.
Traditionally CIFS was used with Microsoft Windows networks, and NFS was
used with UNIX & Linux networks. However, with Samba, Linux machines can
also make use of CIFS shares.
Does this mean that your Windows 2003 server or your Linux box are NAS
servers because they provide access to shared drives over your network?
Yes, they are. You could also purchase a NAS device from a number of
manufacturers. These devices are specifically designed to provide high speed
access to data. More To Be Added
5.4.1. NFS
To be added
5.4.2. CIFS
To be added
5.5. Floppies
Like a hard disk, a floppy is divided into tracks and sectors (and the two
corresponding tracks on either side of a floppy form a cylinder), but there are
many fewer of them than on a hard disk.
A floppy drive can usually use several different types of disks; for example, a
3.5 inch drive can use both 720 KB and 1.44 MB disks. Since the drive has to
operate a bit differently and the operating system must know how big the
disk is, there are many device files for floppy drives, one per combination of
drive and disk type. Therefore, /dev/fd0H1440 is the first floppy drive (fd0),
which must be a 3.5 inch drive, using a 3.5 inch, high density disk (H) of size
1440 KB (1440), i.e., a normal 3.5 inch HD floppy.
The names for floppy drives are complex, however, and Linux therefore has a
special floppy device type that automatically detects the type of the disk in
the drive. It works by trying to read the first sector of a newly inserted floppy
using different floppy types until it finds the correct one. This naturally
requires that the floppy is formatted first. The automatic devices are called
/dev/fd0, /dev/fd1, and so on.
The parameters the automatic device uses to access a disk can also be set
using the program setfdprm. This can be useful if you need to use disks that
do not follow any usual floppy sizes, e.g., if they have an unusual number of
sectors, or if the autodetecting for some reason fails and the proper device
file is missing.
Linux can handle many nonstandard floppy disk formats in addition to all the
standard ones. Some of these require using special formatting programs.
We'll skip these disk types for now, but in the mean time you can examine
the /etc/fdprm file. It specifies the settings that setfdprm recognizes.
The operating system must know when a disk has been changed in a floppy
drive, for example, in order to avoid using cached data from the previous
disk. Unfortunately, the signal line that is used for this is sometimes broken,
and worse, this won't always be noticeable when using the drive from within
MS−DOS. If you are experiencing weird problems using floppies, this might
be the reason. The only way to correct it is to repair the floppy drive.
5.6. CD−ROMs
A CD−ROM drive uses an optically read, plastic coated disk. The information
is recorded on the surface of the disk in small `holes' aligned along a spiral
from the center to the edge. The drive directs a laser beam along the spiral
to read the disk. When the laser hits a hole, it is reflected in one way;
when it hits the smooth surface, it is reflected in another way. This makes it
easy to encode bits, and therefore information. The rest is easy, mere mechanics.
CD−ROM drives are slow compared to hard disks. Whereas a typical hard
disk will have an average seek time less than 15 milliseconds, a fast
CD−ROM drive can take tenths of a second for seeks. The actual data transfer
rate is fairly high at hundreds of kilobytes per second. The slowness means
that CD−ROM drives are not as pleasant to use as hard disks (some Linux
distributions provide `live' filesystems on CD−ROMs, making it unnecessary
to copy the files to the hard disk, making installation easier and saving a lot
of hard disk space), although it is still possible. For installing new software,
CD−ROMs are very good, since maximum speed is not essential during
installation.
There are several ways to arrange data on a CD−ROM. The most popular one
is specified by the international standard ISO 9660. This standard specifies a
very minimal filesystem, which is even more crude than the one MS−DOS
uses. On the other hand, it is so minimal that every operating system should
be able to map it to its native system.
For normal UNIX use, the ISO 9660 filesystem is not usable, so an extension
to the standard has been developed, called the Rock Ridge extension. Rock
Ridge allows longer filenames, symbolic links, and a lot of other goodies,
making a CD−ROM look more or less like any contemporary UNIX filesystem.
Even better, a Rock Ridge filesystem is still a valid ISO 9660 filesystem,
making it usable by non−UNIX systems as well. Linux supports both ISO
9660 and the Rock Ridge extensions; the extensions are recognized and used
automatically.
The filesystem is only half the battle, however. Most CD−ROMs contain data
that requires a special program to access, and most of these programs do
not run under Linux (except, possibly, under dosemu, the Linux MS−DOS
emulator, or wine, the Windows emulator).
Ironically perhaps, wine actually stands for ``Wine Is Not an Emulator''. Wine,
more strictly, is an API (Application Program Interface) replacement. Please
see the wine documentation at http://www.winehq.com for more information.
A CD−ROM drive is accessed via the corresponding device file. There are
several ways to connect a CD−ROM drive to the computer: via SCSI, via a
sound card, or via EIDE. The hardware hacking needed to do this is outside
the scope of this book, but the type of connection decides the device file.
5.7. Tapes
A tape drive uses a tape, similar to cassettes used for music. A tape is serial
in nature, which means that in order to get to any given part of it, you first
have to go through all the parts in between. A disk can be accessed
randomly, i.e., you can jump directly to any place on the disk. The serial
access of tapes makes them slow.
On the other hand, tapes are relatively cheap to make, since they do not
need to be fast. They can also easily be made quite long, and can therefore
contain a large amount of data. This makes tapes very suitable for things like
archiving and backups, which do not require large speeds, but benefit from
low costs and large storage capacities.
5.8. Formatting
Formatting is the process of writing marks on the magnetic media that are
used to mark tracks and sectors. Before a disk is formatted, its magnetic
surface is a complete mess of magnetic signals. When it is formatted, some
order is brought into the chaos by essentially drawing lines where the tracks
go, and where they are divided into sectors. The actual details are not quite
exactly like this, but that is irrelevant. What is important is that a disk cannot
be used unless it has been formatted.
For IDE and some SCSI disks the formatting is actually done at the factory
and doesn't need to be repeated; hence most people rarely need to worry
about it. In fact, formatting a hard disk can cause it to work less well, for
example because a disk might need to be formatted in some very special
way to allow automatic bad sector replacement to work.
During formatting one might encounter bad spots on the disk, called bad
blocks or bad sectors. These are sometimes handled by the drive itself, but
even then, if more of them develop, something needs to be done to avoid
using those parts of the disk. The logic to do this is built into the filesystem;
how to add the information into the filesystem is described below.
Alternatively, one might create a small partition that covers just the bad part
of the disk; this approach might be a good idea if the bad spot is very large,
since filesystems can sometimes have trouble with very large bad areas.
Floppies are formatted with fdformat. The floppy device file to use is given
as the parameter. For example, the following command would format a high
density, 3.5 inch floppy in the first floppy drive.
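The command in question, omitted from this copy, would be along these lines (a sketch using the device naming described earlier; it erases whatever is on the floppy):

```shell
# Format a high density, 3.5 inch, 1440 KB floppy in the first drive.
fdformat /dev/fd0H1440
```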
Note that if you want to use an autodetecting device (e.g., /dev/fd0), you
must set the parameters of the device with setfdprm first. It is usually more
convenient to choose the correct device file that matches the type of the
floppy. Note that it is unwise to format floppies to contain more information
than they are designed for. fdformat also validates the floppy, i.e., checks
it for bad blocks. It will try a bad block several times (you can usually hear
this, the drive noise changes dramatically). If the floppy is only marginally
bad (due to dirt on the read/write head, some errors are false signals),
fdformat won't complain, but a real error will abort the validation process.
The kernel will print log messages for each I/O error it finds; these will go to
the console or, if syslog is being used, to the file /var/log/messages. fdformat
itself won't tell where the error is (one usually doesn't care, floppies are
cheap enough that a bad one is automatically thrown away).
The badblocks command can be used to search any disk or partition for bad
blocks (including a floppy). It does not format the disk, so it can be used to
check even existing filesystems. The example below checks a 3.5 inch floppy
with two bad blocks.
badblocks outputs the block numbers of the bad blocks it finds. Most
filesystems can avoid such bad blocks. They maintain a list of known bad
blocks, which is initialized when the filesystem is made, and can be modified
later. The initial search for bad blocks can be done by the mkfs command
(which initializes the filesystem), but later checks should be done with
badblocks and the new blocks should be added with fsck. We'll describe mkfs
and fsck later.
Many modern disks automatically notice bad blocks, and attempt to fix them
by using a special, reserved good block instead. This is invisible to the
operating system. This feature should be documented in the disk's manual, if
you're curious if it is happening. Even such disks can fail, if the number of
bad blocks grows too large, although chances are that by then the disk will
be so rotten as to be unusable.
5.9. Partitions
A hard disk can be divided into several partitions. Each partition functions as
if it were a separate hard disk.
The idea is that if you have one hard disk, and want to have, say, two
operating systems on it, you can divide the disk into two partitions. Each
operating system uses its partition as it wishes and doesn't touch the other
ones. This way the two operating systems can co−exist peacefully on the
same hard disk. Without partitions one would have to buy a hard disk for
each operating system.
Floppies are not usually partitioned. There is no technical reason against this,
but since they're so small, partitions would be useful only very rarely.
CD−ROMs are usually also not partitioned, since it's easier to use them as
one big disk, and there is seldom a need to have several operating systems
on one.
The partitioning scheme is not built into the hardware, or even into the BIOS.
It is only a convention that many operating systems follow. Not all operating
systems do follow it, but they are the exceptions. Some operating systems
support partitions, but they occupy one partition on the hard disk, and use
their internal partitioning method within that partition. The latter type exists
peacefully with other operating systems (including Linux), and does not
require any special measures, but an operating system that doesn't support
partitions cannot co−exist on the same disk with any other operating
system.
The original partitioning scheme for PC hard disks allowed only four
partitions. This quickly turned out to be too little in real life, partly because
some people want more than four operating systems (Linux, MS−DOS, OS/2,
Minix, FreeBSD, NetBSD, or Windows/NT, to name a few), but primarily
because sometimes it is a good idea to have several partitions for one
operating system. For example, swap space is usually best put in its own
partition for Linux instead of in the main Linux partition for reasons of speed
(see below).
As an example of partitioning, consider a disk divided into three primary
partitions, the second of which is divided into two logical partitions. Part of
the disk is not partitioned at all. The disk as a whole and each primary
partition has a boot sector.
The partition tables (the one in the MBR, and the ones for extended
partitions) contain one byte per partition that identifies the type of that
partition. This attempts to identify the operating system that uses the
partition, or what it uses it for. The purpose is to make it possible to avoid
having two operating systems accidentally using the same partition.
However, in reality, operating systems do not really care about the partition
type byte; e.g., Linux doesn't care at all what it is. Worse, some of them use
it incorrectly; e.g., at least some versions of DR−DOS ignore the most
significant bit of the byte, while others don't.
Note that even though it is called a filesystem, no part of the proc filesystem
touches any disk. It exists only in the kernel's imagination. Whenever anyone
tries to look at any part of the proc filesystem, the kernel makes it look as if
the part existed somewhere, even though it doesn't. So, even though there is
a multi−megabyte /proc/kcore file, it doesn't take any disk space.
There are many programs for creating and removing partitions. Most
operating systems have their own, and it can be a good idea to use each
operating system's own, just in case it does something unusual that the
others can't. Many of the programs are called fdisk, including the Linux one,
or variations thereof. Details on using the Linux fdisk are given on its man page.
The cfdisk command is similar to fdisk, but has a nicer (full screen) user
interface.
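For example, a disk's existing partition table can be printed without changing anything (a sketch; this requires root privileges, and the device name is illustrative):

```shell
# List the partition table of the first IDE disk; -l only prints,
# it never modifies the disk.
fdisk -l /dev/hda
```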
When using IDE disks, the boot partition (the partition with the bootable
kernel image files) must be completely within the first 1024 cylinders. This is
because the disk is used via the BIOS during boot (before the system goes
into protected mode), and BIOS can't handle more than 1024 cylinders. It is
sometimes possible to use a boot partition that is only partly within the first
1024 cylinders. This works as long as all the files that are read with the BIOS
are within the first 1024 cylinders. Since this is difficult to arrange, it is a very
bad idea to do it; you never know when a kernel update or disk
defragmentation will result in an unbootable system. Therefore, make sure
your boot partition is completely within the first 1024 cylinders.
However, this may no longer be true with newer versions of LILO that support
LBA (Logical Block Addressing). Consult the documentation for your
distribution to see if it has a version of LILO where LBA is supported.
Some newer versions of the BIOS and IDE disks can, in fact, handle disks
with more than 1024 cylinders. If you have such a system, you can forget
about the problem; if you aren't quite sure of it, put it within the first 1024
cylinders.
Each partition should have an even number of sectors, since the Linux
filesystems use a 1 kilobyte block size, i.e., two sectors. An odd number of
sectors will result in the last sector being unused. This won't result in any
problems, but it is ugly, and some versions of fdisk will warn about it.
Some tools, such as the commercial program ``Partition Magic'', can resize
partitions in place with a nice interface. Please do remember that
partitioning is dangerous. Make sure you have a recent backup of any
important data before you try changing partition sizes ``on the fly''. The
program parted can resize other types of partitions as well as MS−DOS, but
sometimes in a limited manner. Consult the parted documentation before
using it; better safe than sorry.
Each partition and extended partition has its own device file. The naming
convention for these files is that a partition's number is appended after the
name of the whole disk, with the convention that numbers 1−4 are primary
partitions (regardless of how many primary partitions there are) and numbers
5 and above are logical partitions (regardless of within which primary
partition they reside). For example, /dev/hda1 is the first primary partition on
the first IDE hard disk, and /dev/sdb7 is the third logical partition on the
second SCSI hard disk.
5.10. Filesystems
Most UNIX filesystem types have a similar general structure, although the
exact details vary quite a bit. The central concepts are superblock, inode,
data block, directory block, and indirection block. The superblock contains
information about the filesystem as a whole, such as its size (the exact
information here depends on the filesystem). An inode contains all
information about a file, except its name. The name is stored in the directory,
together with the number of the inode. A directory entry consists of a
filename and the number of the inode which represents the file. The inode
contains the numbers of several data blocks, which are used to store the
data in the file. There is space only for a few data block numbers in the
inode, however, and if more are needed, more space for pointers to the data
blocks is allocated dynamically. These dynamically allocated blocks are
indirect blocks; the name indicates that in order to find the data block, one
has to find its number in the indirect block first.
UNIX filesystems usually allow one to create a hole in a file (this is done with
the lseek() system call; check the manual page), which means that the
filesystem just pretends that at a particular place in the file there are just zero
bytes, but no actual disk sectors are reserved for that place in the file (this
means that the file will use a bit less disk space). This happens especially
often for small binaries, Linux shared libraries, some databases, and a few
other special cases. (Holes are implemented by storing a special value as the
address of the data block in the indirect block or inode. This special address
means that no data block is allocated for that part of the file, ergo, there is a
hole in the file.)
minix
The oldest, presumed to be the most reliable, but quite limited in features
(some time stamps are missing, at most 30 character filenames) and
restricted in capabilities (at most 64 MB per filesystem).
xia
A modified version of the minix filesystem that lifts the limits on the
filenames and filesystem sizes, but does not otherwise introduce new
features. It is not very popular, but is reported to work very well.
ext3
The ext3 filesystem has all the features of the ext2 filesystem. The difference
is, journaling has been added. This improves performance and recovery time
in case of a system crash. This has become more popular than ext2.
ext2
The most featureful of the native Linux filesystems. It is designed to be easily
upwards compatible, so that new versions of the filesystem code do not
require re−making the existing filesystems.
ext
An older version of ext2. It is rarely used any more, since ext2 is superior
and most old ext filesystems have long since been converted.
reiserfs
A more robust filesystem. Journaling is used which makes data loss less likely.
Journaling is a mechanism whereby a record is kept of transactions which are
to be performed, or which have been performed. This allows the filesystem
to reconstruct itself fairly easily after damage caused by, for example,
improper shutdowns.
jfs
JFS is a journaled filesystem designed and developed by IBM.
msdos
Compatibility with MS−DOS (and OS/2 and Windows NT) FAT filesystems.
umsdos
Extends the msdos filesystem driver under Linux to get long filenames,
owners, permissions, links, and device files. This allows a normal msdos
filesystem to be used as if it were a Linux one, thus removing the need for a
separate partition for Linux.
vfat
An extension of the FAT filesystem known as FAT32. It supports larger disk
sizes than FAT, as well as long filenames.
iso9660
The standard CD−ROM filesystem; the popular Rock Ridge extension to the
CD−ROM standard that allows longer file names is supported automatically.
nfs
A networked filesystem that allows sharing a filesystem between many
computers to allow easy access to the files from all of them.
smbfs
A networked filesystem which allows sharing of a filesystem with an MS
Windows computer, compatible with the Windows file sharing protocols.
hpfs
The OS/2 filesystem.
sysv
SystemV/386, Coherent, and Xenix filesystems.
NTFS
The most advanced Microsoft journaled filesystem providing faster file access
and stability over previous Microsoft filesystems.
For more details about the features of the different
filesystems, see Section 5.10.6. You can also read the Filesystems HOWTO
located at http://www.tldp.org/HOWTO/Filesystems−HOWTO.html
There is also the proc filesystem, usually accessible as the /proc directory,
which is not really a filesystem at all, even though it looks like one. The proc
filesystem makes it easy to access certain kernel data structures, such as the
process list (hence the name). It makes these data structures look like a
filesystem, and that filesystem can be manipulated with all the usual file
tools.
See Section 5.10.6 for more details about the features of the different
filesystem types.
Filesystems are created, i.e., initialized, with the mkfs command. There is
actually a separate program for each filesystem type. mkfs is just a front end
that runs the appropriate program depending on the desired filesystem type.
The type is selected with the −t fstype option.
The programs called by mkfs have slightly different command line interfaces.
The common and most important options are summarized below; see the
manual pages for more.
−t fstype Select the type of the filesystem.
−c Search for bad blocks and initialize the bad block list accordingly.
−l filename Read the initial bad block list from the named file.
There are also many programs written to add specific options when creating
a specific filesystem. For example mkfs.ext3 adds a −b option to allow the
administrator to specify what block size should be used. Be sure to find out if
there is a specific program available for the filesystem type you want to use.
For more information on determining what block size to use please see
Section 5.10.5.
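For instance, mkfs.ext3's −b option could be used as follows (a sketch: a file-backed image is used here so the example is safe to run; on a real system the last argument would be a partition's device file, and the command would destroy any data on it):

```shell
# Create a 16 MB scratch image and put an ext3 filesystem on it
# with a 4096-byte block size. -F forces mkfs to accept a plain
# file instead of a block device; -q suppresses the chatter.
dd if=/dev/zero of=fs.img bs=1024 count=16384
mkfs.ext3 -q -F -b 4096 fs.img
```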
First, the floppy was formatted (the −n option prevents validation, i.e., bad
block checking). Then bad blocks were searched with badblocks, with the
output redirected to a file, bad−blocks. Finally, the filesystem was created,
with the bad block list initialized by whatever badblocks found.
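The sequence just described, reconstructed as a sketch (the floppy device name follows the convention discussed earlier; the filesystem type and filename are as the text implies):

```shell
# Format without validation (-n), search for bad blocks,
# then create the filesystem with the bad block list.
fdformat -n /dev/fd0H1440
badblocks /dev/fd0H1440 1440 > bad-blocks
mkfs -t ext2 -l bad-blocks /dev/fd0H1440
```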
The −c option could have been used with mkfs instead of badblocks and a
separate file. The −c option is more convenient than a separate use of
badblocks, but badblocks is necessary for checking after the filesystem has
been created.
The block size specifies the size that the filesystem will use to read and write
data. Larger block sizes will help improve disk I/O performance when using
large files, such as databases. This happens because the disk can read or
write data for a longer period of time before having to search for the next
block.
On the downside, if you are going to have a lot of smaller files on that
filesystem, like /etc, there is the potential for a lot of wasted disk space.
For example, if you set your block size to 4096, or 4K, and you create a file
that is 256 bytes in size, it will still consume 4K of space on your harddrive.
For one file that may seem trivial, but when your filesystem contains
hundreds or thousands of files, this can add up.
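The waste described above is easy to quantify (a sketch with illustrative figures):

```shell
# A 256-byte file on a 4096-byte-block filesystem wastes 3840
# bytes; a thousand such files waste almost 4 MB.
block_size=4096
file_size=256
files=1000
echo $(( (block_size - file_size) * files ))   # prints 3840000
```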
Block size can also affect the maximum supported file size on some
filesystems. This is because many modern filesystems are limited not by block
size or file size, but by the number of blocks. Therefore you would use the
formula: block size * maximum number of blocks = maximum file size.
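As a worked instance of that formula, with illustrative figures of 4096-byte blocks and 32-bit block numbers:

```shell
# block size * maximum number of blocks = maximum size
# 4096 * 2^32 = 17592186044416 bytes, i.e. 16 TB
echo $(( 4096 * 4294967296 ))
```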
Before one can use a filesystem, it has to be mounted. The operating system
then does various bookkeeping things to make sure that everything works.
Since all files in UNIX are in a single directory tree, the mount operation will
make it look like the contents of the new filesystem are the contents of an
existing subdirectory in some already mounted filesystem.
The mount command takes two arguments. The first one is the device file
corresponding to the disk or partition containing the filesystem. The second
one is the directory below which it will be mounted. After the corresponding
mount commands have been run, the contents of the two filesystems look
just like the contents of the /home and /usr directories, respectively. One
would then say that /dev/hda2 is mounted on /home, and similarly for /usr.
To look at either filesystem, one
would look at the contents of the directory on which it has been mounted,
just as if it were any other directory. Note the difference between the device
file, /dev/hda2, and the mounted−on directory, /home. The device file gives
access to the raw contents of the disk, the mounted−on directory gives
access to the files on the disk. The mounted−on directory is called the mount
point.
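The commands discussed above would look like this (a sketch; /dev/hda2 and /dev/hda3 are illustrative device names, and the mount points must already exist):

```shell
# Mount two filesystems onto their mount points; this requires
# root privileges.
mount /dev/hda2 /home
mount /dev/hda3 /usr
```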
Linux supports many filesystem types. mount tries to guess the type of the
filesystem. You can also use the −t fstype option to specify the type directly;
this is sometimes necessary, since the heuristics mount uses do not always
work.
The mounted−on directory need not be empty, although it must exist. Any
files in it, however, will be inaccessible by name while the filesystem is
mounted. (Any files that have already been opened will still be accessible.
Files that have hard links from other directories can be accessed using those
names.) There is no harm done with this, and it can even be useful. For
instance, some people like to have /tmp and /var/tmp synonymous, and
make /tmp be a symbolic link to /var/tmp. When the system is booted, before
the /var filesystem is mounted, a /var/tmp directory residing on the root
filesystem is used instead. When /var is mounted, it will make the /var/tmp
directory on the root filesystem inaccessible. If /var/tmp didn't exist on the
root filesystem, it would be impossible to use temporary files before
mounting /var.
If you don't intend to write anything to the filesystem, use the −r switch for
mount to do a read−only mount. This will make the kernel stop any attempts
at writing to the filesystem, and will also stop the kernel from updating file
access times in the inodes. Read−only mounts are necessary for unwritable
media, e.g., CD−ROMs.
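For example (a sketch; the device and mount point names are illustrative, and the mount point must exist):

```shell
# Mount a CD-ROM read-only; -t iso9660 names the filesystem type
# explicitly rather than relying on autodetection.
mount -r -t iso9660 /dev/cdrom /mnt/cdrom
```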
The alert reader has already noticed a slight logistical problem. How is the
first filesystem (called the root filesystem, because it contains the root
directory) mounted, since it obviously can't be mounted on another
filesystem? Well, the answer is that it is done by magic. The root filesystem is
magically mounted at boot time, and one can rely on it to always be
mounted. If the root filesystem can't be mounted, the system does not boot.
The name of the filesystem that is magically mounted as root is either
compiled into the kernel, or set using LILO or rdev. For more information, see
the kernel source or the Kernel Hackers' Guide.
The root filesystem is usually first mounted read−only. The startup scripts
will then run fsck to verify its validity, and if there are no problems, they will
re−mount it so that writes will also be allowed. fsck must not be run on a
mounted filesystem, since any changes to the filesystem while fsck is
running will cause trouble. Since the root filesystem is mounted read−only
while it is being checked, fsck can fix any problems without worry, since the
remount operation will flush any metadata that the filesystem keeps in
memory.
On many systems there are other filesystems that should also be mounted
automatically at boot time. These are specified in the /etc/fstab file; see the
fstab man page for details on the format. The details of exactly when the
extra filesystems are mounted depend on many factors, and can be
configured by each administrator if need be; see Chapter 8.
Mounting and unmounting requires super user privileges, i.e., only root can
do it. The reason for this is that if any user can mount a floppy on any
directory, then it is rather easy to create a floppy with, say, a Trojan horse
disguised as /bin/sh, or any other often used program. However, it is often
necessary to allow users to use floppies, and there are several ways to do
this:
Give the users the root password. This is obviously bad security, but is the
easiest solution. It works well if there is no need for security anyway, which is
the case on many non−networked, personal systems.
Use a program such as sudo to allow users to use mount. This is still bad
security, but doesn't directly give super user privileges to everyone. It
requires several seconds of hard thinking on the users' behalf. Furthermore
sudo can be configured to only allow users to execute certain commands.
See the sudo(8), sudoers(5), and visudo(8) manual pages.
Make the users use mtools, a package for manipulating MS−DOS filesystems,
without mounting them. This works well if MS−DOS floppies are all that is
needed, but is rather awkward otherwise.
List the floppy devices and their allowable mount points together with the
suitable options in /etc/fstab. The columns are: device file to mount,
directory to mount on, filesystem type, options, backup frequency (used by
dump), and fsck pass number (to specify the order in which filesystems
should be checked upon boot; 0 means no check).
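Such an /etc/fstab entry might look like this (a sketch; the mount point /floppy is an assumed name):

```
/dev/fd0   /floppy   msdos   user,noauto   0   0
```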
The noauto option prevents this mount from being done automatically when
the system is started (i.e., it prevents mount −a from mounting it). The user
option
allows any user to mount the filesystem, and, because of security reasons,
disallows execution of programs (normal or setuid) and interpretation of
device files from the mounted filesystem. After this, any user can mount a
floppy with an msdos filesystem. The floppy can (and needs to, of course) be
unmounted with the corresponding umount command.
If you want to provide access to several types of floppies, you need to give
several mount points. The settings can be different for each mount point.
The "auto" option in the filesystem type column allows the mount command
to query the filesystem and try to determine what type it is itself. This option
won't work on all filesystem types, but works fine on the more common ones.
For MS−DOS filesystems (not just floppies), you probably want to restrict
access to it by using the uid, gid, and umask filesystem options, described in
detail on the mount manual page. If you aren't careful, mounting an
MS−DOS filesystem gives everyone at least read access to the files in it,
which is not a good idea.
To be added
This section will describe mount options and how to use them in /etc/fstab to
provide additional system security.
Most systems are setup to run fsck automatically at boot time, so that any
errors are detected (and hopefully corrected) before the system is used. Use
of a corrupted filesystem tends to make things worse: if the data structures
are messed up, using the filesystem will probably mess them up even more,
resulting in more data loss. However, fsck can take a while to run on big
filesystems, and since errors almost never occur if the system has been shut
down properly, a couple of tricks are used to avoid doing the checks in such
cases. The first is that if the file /etc/fastboot exists, no checks are made. The
second is that the ext2 filesystem has a special marker in its superblock that
tells whether the filesystem was unmounted properly after the previous
mount. This allows e2fsck (the version of fsck for the ext2 filesystem) to
avoid checking the filesystem if the flag indicates that the unmount was
done (the assumption being that a proper unmount indicates no problems).
Whether the /etc/fastboot trick works on your system depends on your
startup scripts, but the ext2 trick works every time you use e2fsck. It has to
be explicitly bypassed with an option to e2fsck to be avoided. (See the
e2fsck man page for details on how.)
The automatic checking only works for the filesystems that are mounted
automatically at boot time. Use fsck manually to check other filesystems,
e.g., floppies.
It can be a good idea to periodically check for bad blocks. This is done with
the badblocks command. It outputs a list of the numbers of all bad blocks it
can find. This list can be fed to fsck to be recorded in the filesystem data
structures so that the operating system won't try to use the bad blocks for
storing data.
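The two steps might be combined roughly as follows (a sketch; the device name is illustrative, and the filesystem must be unmounted while fsck runs on it):

```shell
# Find the bad blocks, then record them in the ext2 bad block
# list via e2fsck's -l option (passed through by fsck).
badblocks /dev/fd0H1440 1440 > bad-blocks
fsck -t ext2 -l bad-blocks /dev/fd0H1440
```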
If badblocks reports a block that was already used, e2fsck will try to move
the block to another place. If the block was really bad, not just marginal, the
contents of the file may be corrupted.
In the earlier days of the ext2 filesystem, there was a concern over file
fragmentation that led to the development of a defragmentation program
called defrag. A copy of it can still be downloaded at
http://www.go.dlr.de/linux/src/defrag−0.73.tar.gz. However, it is HIGHLY
recommended that you NOT use it. It was designed for an older version of
ext2, and has not been updated since 1998! I only mention it here for
reference purposes.
There are many MS−DOS defragmentation programs that move blocks
around in the filesystem to remove fragmentation. For other filesystems,
defragmentation must be done by backing up the filesystem, re−creating it,
and restoring the files from backups. Backing up a filesystem before
defragmenting is a good idea for all filesystems, since many things can go
wrong during the defragmentation.
Some other tools are also useful for managing filesystems. df shows the free
disk space on one or more filesystems; du shows how much disk space a
directory and all its files contain. These can be used to hunt down disk space
wasters. Both have manual pages which detail the (many) options which can
be used.
sync forces all unwritten blocks in the buffer cache (see Section 6.6) to be
written to disk. It is seldom necessary to do this by hand; the daemon
process update does this automatically. It can be useful in catastrophes, for
example if update or its helper process bdflush dies, or if you must turn off
power now and can't wait for update to run. Again, there are manual pages.
The man command is your very best friend in Linux. Its cousin apropos is
also very useful when you don't know the name of the command you want.
A maximal mount count. e2fsck enforces a check when a filesystem has been
mounted too many times, even if the clean flag is set. For a system that is
used for developing or testing, it might be a good idea to reduce
this limit.
A maximal time between checks. e2fsck can also enforce a maximal time
between two checks, even if the clean flag is set, and the filesystem hasn't
been mounted very often. This can be disabled, however.
Number of blocks reserved for root. Ext2 reserves some blocks for root so
that if the filesystem fills up, it is still possible to do system administration
without having to delete anything. The reserved amount is by default 5
percent, which on most disks isn't enough to be wasteful. However, for
floppies there is no point in reserving any blocks. See the tune2fs manual
page for more information.
dumpe2fs shows information about an ext2 or
ext3 filesystem, mostly from the superblock. Some
of the information in the output is technical and requires understanding of
how the filesystem works, but much of it is readily understandable even for
lay-admins.
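These parameters might be inspected and adjusted roughly as follows (the device name /dev/hda1 is an assumption; consult the manual pages for the exact option semantics):

```shell
tune2fs -c 25 /dev/hda1    # force a check after at most 25 mounts
tune2fs -i 2w /dev/hda1    # and at least once every two weeks
tune2fs -m 0 /dev/fd0      # reserve no blocks for root (e.g. on a floppy)
dumpe2fs -h /dev/hda1      # show just the superblock information
```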
dump and restore can be used to back up an ext2 filesystem. They are ext2
specific versions of the traditional UNIX backup tools. See Section 12.1 for
more information on backups.
Not all disks or partitions are used as filesystems. A swap partition, for
example, will not have a filesystem on it. Many floppies are used in a
tape−drive emulating fashion, so that a tar (tape archive) or other file is
written directly on the raw disk, without a filesystem. Linux boot floppies
don't contain a filesystem, only the raw kernel.
Avoiding a filesystem has the advantage of making more of the disk usable,
since a filesystem always has some bookkeeping overhead. It also makes the
disks more easily compatible with other systems: for example, the tar file
format is the same on all systems, while filesystems are different on most
systems. You will quickly get used to disks without filesystems if you need
them. Bootable Linux floppies also do not necessarily have a filesystem,
although they may.
One reason to use raw disks is to make image copies of them. For instance, if
the disk contains a partially damaged filesystem, it is a good idea to make an
exact copy of it before trying to fix it, since then you can start again if your
fixing breaks things even more. The first dd makes an exact image of the
floppy to the file floppy−image, the second one writes the image to the
floppy. (The user has presumably switched the floppy before the second
command. Otherwise the command pair is of doubtful usefulness.)
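The dd commands referred to above might look like this (the device name /dev/fd0 is an assumption for the floppy drive):

```shell
# Make an exact image of the floppy in a regular file:
dd if=/dev/fd0 of=floppy-image
# ... switch floppies, then write the image back to the new floppy:
dd if=floppy-image of=/dev/fd0
```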
5.12. Allocating disk space
For a simple workstation with limited disk space, such as a laptop, you may
have as few as 3 partitions: one each for /, /boot, and swap. However, for
most users this is not a recommended solution.
When creating your partitioning scheme, there are some things you need to
remember. You cannot create separate partitions for the following directories:
/bin, /etc, /dev, /initrd, /lib, and /sbin. The contents of these directories are
required at bootup and must always be part of the / partition.
It is also recommended that you create separate partitions for /var and /tmp.
This is because both directories typically have data that is constantly
changing. Not creating separate partitions for these filesystems puts you at
risk of having log files fill up your / partition. The problem with having many
partitions is that it splits the total amount of free disk space into many small
pieces. One way to avoid this problem is to create Logical Volumes.
Using LVM allows administrators the flexibility to create logical disks that can
be expanded dynamically as more disk space is required.
This is done first by creating partitions with the 0x8e Linux LVM partition
type. Then the Physical Partitions are added to a Volume Group and broken
up into chunks, or Physical Extents. These extents can then
be grouped into Logical Volumes. These Logical Volumes then can be
formatted just like a physical partition. The big difference is that they can be
expanded by adding more extents to them.
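A rough sketch of those steps (the device names and sizes here are assumptions, and these commands must be run as root on real 0x8e partitions):

```shell
pvcreate /dev/hda5 /dev/hda6      # initialize the partitions as Physical Volumes
vgcreate vg0 /dev/hda5 /dev/hda6  # group them into a Volume Group
lvcreate -L 2G -n lv_home vg0     # carve a Logical Volume out of the extents
mke2fs /dev/vg0/lv_home           # format it just like a physical partition
lvextend -L +1G /dev/vg0/lv_home  # later, grow it by adding more extents
```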
Right now, a full discussion of LVM is beyond the scope of this guide.
However, an excellent resource for learning more about LVM can be found
at http://www.tldp.org/HOWTO/LVM−HOWTO.html.
The Linux distribution you install will give some indication of how much disk
space you need for various configurations. Programs installed separately
may also do the same. This will help you plan your disk space usage, but you
should prepare for the future and reserve some extra space for things you
will notice later that you need.
The amount you need for user files depends on what your users wish to do.
Most people seem to need as much space for their files as possible, but the
amount they will live happily with varies a lot. Some people do only light text
processing and will survive nicely with a few megabytes, others do heavy
image processing and will need gigabytes.
By the way, when comparing file sizes given in kilobytes or megabytes and
disk space given in megabytes, it can be important to know that the two
units can be different. Some disk manufacturers like to pretend that a
kilobyte is 1000 bytes and a megabyte is 1000 kilobytes, while all the rest of
the computing world uses 1024 for both factors. Therefore, a 345 MB hard
disk is really a 330 MB hard disk.
First, I created a /boot partition at 128 MB. This is larger than I will need, and
big enough to give me space if I need it. I created a separate /boot partition
to ensure that this filesystem will never get filled up, and therefore will be
bootable. Then I created a 5 GB /var partition. Since the /var filesystem is
where log files and email is stored I wanted to isolate it from my root
partition. (I have had log files grow overnight and fill my root filesystem in
the past.) Next, I created a 15 GB /home partition. This is handy in the event
of a system crash. If I ever have to re−install Linux from scratch, I can tell
the installation program to not format this partition, and instead remount it
without the data being lost. Finally, since I had 512 MB of RAM I created a
1024 MB (or 1 GB) swap partition. This left me with roughly a 9 GB root
filesystem. Using my old 10 GB hard drive, I created an 8 GB /usr partition
and left 2 GB unused. This is in case I need more space in the future. In the
end, my partition tables looked like this:
Adding more disk space for Linux is easy, at least after the hardware has
been properly installed (the hardware installation is outside the scope of this
book). You format it if necessary, then create the partitions and filesystem as
described above, and add the proper lines to /etc/fstab so that it is mounted
automatically.
The best tip for saving disk space is to avoid installing unnecessary
programs. Most Linux distributions have an option to install only part of the
packages they contain, and by analyzing your needs you might notice that
you don't need most of them. This will help save a lot of disk space, since
many programs are quite large. Even if you do need a particular package or
program, you might not need all of it. For example, some on−line
documentation might be unnecessary, as might some of the Elisp files for
GNU Emacs, some of the fonts for X11, or some of the libraries for
programming.
Another way to save space is to take special care when formatting your
partitions. Most modern filesystems will allow you to specify the block size.
The block size is the chunk size that the filesystem will use to read and write
data. Larger block sizes will help disk I/O performance when using large files,
such as databases. This happens because the disk can read or write data for
a longer period of time before having to search for the next block.
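For example, when creating an ext2 filesystem you might choose a larger block size for a partition that will hold big files (the device name is an assumption):

```shell
mke2fs -b 4096 /dev/hda3   # 4 KB blocks instead of the 1 KB default
```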
"Minnet, jag har tappat mitt minne, är jag svensk eller finne, kommer inte
ihåg..." (Bosse Österberg)
A Swedish drinking song, (rough) translation: ``Memory, I have lost my
memory. Am I Swedish or Finnish? I can't remember''
This section describes the Linux memory management features, i.e., virtual
memory and the disk buffer cache. The purpose and workings and the things
the system administrator needs to take into consideration are described.
Linux supports virtual memory, that is, using a disk as an extension of RAM
so that the effective size of usable memory grows correspondingly. The
kernel will write the contents of a currently unused block of memory to the
hard disk so that the memory can be used for another purpose. When the
original contents are needed again, they are read back into memory. This is
all made completely transparent to the user; programs running under Linux
only see the larger amount of memory available and don't notice that parts
of them reside on the disk from time to time. Of course, reading and writing
the hard disk is slower (on the order of a thousand times slower) than using
real memory, so the programs don't run as fast. The part of the hard disk
that is used as virtual memory is called the swap space.
Linux can use either a normal file in the filesystem or a separate partition for
swap space. A swap partition is faster, but it is easier to change the size of a
swap file (there's no need to repartition the whole hard disk, and possibly
install everything from scratch). When you know how much swap space you
need, you should go for a swap partition, but if you are uncertain, you can
use a swap file first, use the system for a while so that you can get a feel for
how much swap you need, and then make a swap partition when you're
confident about its size.
You should also know that Linux allows one to use several swap partitions
and/or swap files at the same time. This means that if you only occasionally
need an unusual amount of swap space, you can set up an extra swap file at
such times, instead of keeping the whole amount allocated all the time.
The bit about holes is important. The swap file reserves the disk space so
that the kernel can quickly swap out a page without having to go through all
the things that are necessary when allocating a disk sector to a file.
The kernel merely uses any sectors that have already been allocated to the
file. Because a hole in a file means that there are no disk sectors allocated
(for that place in the file), it is not good for the kernel to try to use them.
One good way to create the swap file without holes is through the following
command.
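A sketch of such a command (the file name and size here are examples):

```shell
# Fill the file with zero bytes so every sector is actually allocated:
dd if=/dev/zero of=/extra-swap bs=1024 count=1024
```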
where /extra−swap is the name of the swap file and its size is given after
the count=. It is best for the size to be a multiple of 4, because the kernel
writes out memory pages, which are 4 kilobytes in size. If the size is not a
multiple of 4, the last couple of kilobytes may be unused.
A swap partition is also not special in any way. You create it just like any
other partition; the only difference is that it is used as a raw partition, that is,
it will not contain any filesystem at all. It is a good idea to mark swap
partitions as type 82 (Linux swap); this will make partition listings
clearer, even though it is not strictly necessary to the kernel.
After you have created a swap file or a swap partition, you need to write a
signature to its beginning; this contains some administrative information and
is used by the kernel.
Note that the swap space is still not in use yet: it exists, but the kernel does
not use it to provide virtual memory.
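Using the example swap file from above, the two steps might look like this (mkswap writes the signature, and swapon, run as root, takes the space into use):

```shell
mkswap /extra-swap   # write the swap signature to the file
swapon /extra-swap   # start using it as virtual memory
```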
You should be very careful when using mkswap, since it does not check that
the file or partition isn't used for anything else. You can easily overwrite
important files and partitions with mkswap! Fortunately, you should only
need to use mkswap when you install your system.
The Linux memory manager limits the size of each swap space to 2 GB. You
can, however, use up to 8 swap spaces simultaneously, for a total of 16 GB.
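The report described below comes from the free command:

```shell
free      # memory and swap usage, in kilobytes
free -m   # the same, in megabytes
```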
The first line of output (Mem:) shows the physical memory. The total column
does not show the physical memory used by the kernel, which is usually
about a megabyte. The used column shows the amount of memory used (the
second line does not count buffers). The free column shows completely
unused memory. The shared column shows the amount of memory shared by
several processes; the more, the merrier. The buffers column shows the
current size of the disk buffer cache.
That last line (Swap:) shows similar information for the swap spaces. If this
line is all zeroes, your swap space is not activated.
The same information is available via top, or using the proc filesystem in
file /proc/meminfo. It is currently difficult to get information on the use of a
specific swap space.
A swap space can be removed from use with swapoff. It is usually not
necessary to do it, except for temporary swap spaces. Any pages in use in
the swap space are swapped in first; if there is not sufficient physical
memory to hold them, they will then be swapped out (to some other swap
space). If there is not enough virtual memory to hold all of the pages Linux
will start to thrash; after a long while it should recover, but meanwhile the
system is unusable. You should check (e.g., with free) that there is enough
free memory before removing a swap space from use.
All the swap spaces that are used automatically with swapon −a can be
removed from use with swapoff −a; it looks at the file /etc/fstab to find what
to remove. Any manually used swap spaces will remain in use.
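The swap spaces handled by swapon −a and swapoff −a are those listed in /etc/fstab with lines such as these (the device and file names are examples):

```
/dev/hda8    none    swap    sw    0    0
/extra-swap  none    swap    sw    0    0
```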
Sometimes a lot of swap space can be in use even though there is a lot of
free physical memory. This can happen for instance if at one point there is
need to swap, but later a big process that occupied much of the physical
memory terminates and frees the memory. The swapped−out data is not
automatically swapped in until it is needed, so the physical memory may
remain free for a long time. There is no need to worry about this, but it can
be comforting to know what is happening.
Virtual memory is built into many operating systems. Since they each need it
only when they are running, i.e., never at the same time, the swap spaces of
all but the currently running one are being wasted. It would be more efficient
for them to share a single swap space. This is possible, but can require a bit
of hacking. The Tips−HOWTO at
http://www.tldp.org/HOWTO/Tips−HOWTO.html contains some advice
on how to implement this.
Some people will tell you that you should allocate twice as much swap space
as you have physical memory, but this is a bogus rule. Here's how to do it
properly:
Estimate your total memory needs. This is the largest amount of memory
you'll probably need at a time, that is the sum of the memory requirements
of all the programs you want to run at the same time. This can be done by
running at the same time all the programs you are likely to ever be running
at the same time.
For instance, if you want to run X, you should allocate about 8 MB for it, gcc
wants several megabytes (some files need an unusually large amount, up to
tens of megabytes, but usually about four should do), and so on. The kernel
will use about a megabyte by itself, and the usual shells and other small
utilities perhaps a few hundred kilobytes (say a megabyte together). There is
no need to try to be exact, rough estimates are fine, but you might want to
be on the pessimistic side.
Remember that if there are going to be several people using the system at
the same time, they are all going to consume memory. However, if two
people run the same program at the same time, the total memory
consumption is usually not double, since code pages and shared libraries
exist only once.
The free and ps commands are useful for estimating the memory needs.
Based on the computations above, you know how much memory you'll be
needing in total. So, in order to allocate swap space, you just need to
subtract the size of your physical memory from the total memory needed,
and you know how much swap space you need. (On some versions of UNIX,
you need to allocate space for an image of the physical memory as well, so
the amount computed in step 2 is what you need and you shouldn't do the
subtraction.)
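As a worked example of the subtraction, suppose the programs you run together need about 96 MB and the machine has 64 MB of RAM (both figures are hypothetical):

```shell
TOTAL_MB=96   # estimated total memory need (assumed figure)
RAM_MB=64     # physical memory (assumed figure)
echo "swap needed: $((TOTAL_MB - RAM_MB)) MB"   # prints: swap needed: 32 MB
```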
If your calculated swap space is very much larger than your physical memory
(more than a couple times larger), you should probably invest in more
physical memory, otherwise performance will be too low.
It's a good idea to have at least some swap space, even if your calculations
indicate that you need none. Linux uses swap space somewhat aggressively,
so that as much physical memory as possible can be kept free. Linux will
swap out memory pages that have not been used, even if the memory is not
yet needed for anything. This avoids waiting for swapping when it is needed:
the swapping can be done earlier, when the disk is otherwise idle.
Swap space can be divided among several disks. This can sometimes
improve performance, depending on the relative speeds of the disks and the
access patterns of the disks. You might want to experiment with a few
schemes, but be aware that doing the experiments properly is quite difficult.
You should not believe claims that any one scheme is superior to any other,
since it won't always be true.
Disk buffering works for writes as well. On the one hand, data that is written
is often soon read again (e.g., a source code file is saved to a file, then read
by the compiler), so putting data that is written in the cache is a good idea.
On the other hand, by only putting the data into the cache, not writing it to
disk at once, the program that writes runs quicker. The writes can then be
done in the background, without slowing down the other programs.
Most operating systems have buffer caches (although they might be called
something else), but not all of them work according to the above principles.
Some are write−through: the data is written to disk at once (it is kept in the
cache as well, of course). The cache is called write−back if the writes are
done at a later time. Write−back is more efficient than write−through, but
also a bit more prone to errors: if the machine crashes, or the power is cut at
a bad moment, or the floppy is removed from the disk drive before the data
in the cache waiting to be written gets written, the changes in the cache are
usually lost. This might even mean that the filesystem (if there is one) is not
in full working order, perhaps because the unwritten data held important
changes to the bookkeeping information.
Because of this, you should never turn off the power without using a proper
shutdown procedure or remove a floppy from the disk drive until it has been
unmounted (if it was mounted) or after whatever program is using it has
signaled that it is finished and the floppy drive light doesn't shine anymore.
The sync command flushes the buffer, i.e., forces all unwritten data to be
written to disk, and can be used when one wants to be sure that everything
is safely written. In traditional UNIX systems, there is a program called
update running in the background which does a sync every 30 seconds, so it
is usually not necessary to use sync. Linux has an additional daemon,
bdflush, which does a more imperfect sync more frequently to avoid the
sudden freeze due to heavy disk I/O that sync sometimes causes.
If the cache is of a fixed size, it is not very good to have it too big, either,
because that might make the free memory too small and cause swapping
(which is also slow). To make the most efficient use of real memory, Linux
automatically uses all free RAM for buffer cache, but also automatically
makes the cache smaller when programs need more memory.
Under Linux, you do not need to do anything to make use of the cache, it
happens completely automatically. Except for following the proper
procedures for shutdown and removing floppies, you do not need to worry
about it.
When a performance issue arises, there are 4 main areas to consider: CPU,
Memory, Disk I/O, and Network. The ability to determine where the
bottleneck is can save you a lot of time.
The most common of these commands is top. The top command will display a
continually updating report of system resource usage. The top portion of the
report lists information such as the system time, uptime, CPU usage, physical
and swap memory usage, and number of processes. Below that is a list of the
processes sorted by CPU utilization.
You can modify the output of top while it is running. If you hit an i, top will no
longer display idle processes. Hit i again to see them again. Hitting M will
sort by memory usage, S will sort by how long the processes have been
running, and P will sort by CPU usage again.
In addition to viewing options, you can also modify processes from within the
top command. You can use u to view processes owned by a specific user, k to
kill processes, and r to renice them.
For more in−depth information about processes you can look in the /proc
filesystem. In the /proc filesystem you will find a series of sub−directories
with numeric names. These directories are associated with the processes ids
of currently running processes. In each directory you will find a series of files
containing information about the process.
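For example, the current shell's own entry under /proc can be explored like this:

```shell
ls /proc/$$            # the files describing this shell's process
cat /proc/$$/cmdline   # the command line it was started with
cat /proc/$$/status    # name, state, memory usage, and more
```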
The iostat command will display the current CPU load average and disk I/O
information. This is a great command for monitoring disk I/O. For 2.4 kernels
the device is named using the device's major and minor numbers. In this case
the device listed is /dev/hda. To have iostat print this out for you, use the −x
option. The iostat man page contains a detailed explanation of what each of
these columns means.
The ps command will provide you with a list of processes currently running.
There is a wide variety of options that this command gives you.
A common use would be to list all processes currently running. To do this you
would use the ps −ef command. (Screen output from this command is too
large to include, the following is only a partial output.)
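A sketch of the invocation (the output naturally varies from system to system):

```shell
ps -ef | head -5   # all processes in full format, first few lines only
```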
The first column shows who owns the process. The second column is the
process ID. The third column is the parent process ID. This is the process
that generated, or started, the process. The fourth column is the CPU usage
(in percent). The fifth column is the start time, or date if the process has
been running long enough. The sixth column is the tty associated with the
process, if applicable. The seventh column is the cumulative CPU usage (the
total amount of CPU time it has used while running). The eighth column is
the command itself.
With this information you can see exactly what is running on your system and
kill run−away processes, or those that are causing problems.
The vmstat command will provide a report showing statistics for system
processes, memory, swap, I/O, and the CPU. These statistics are generated
using data from the last time the command was run to the present. If the
command has never been run, the data will be from the last reboot until the
present.
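Typical invocations might look like this:

```shell
vmstat       # one line of averages since the last report
vmstat 5 3   # three reports, five seconds apart
```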
The lsof command will print out a list of every file that is in use. Since Linux
considers everything a file, this list can be very long. However, this
command can be useful in diagnosing problems. An example of this is if you
wish to unmount a filesystem, but you are being told that it is in use. You
could use this command and grep for the name of the filesystem to see who
is using it. Or suppose you want to see all files in use by a particular process.
To do this you would use lsof −p <processid>.
To learn more about what command line tools are available, Chris Karakas
has written a reference guide titled GNU/Linux Command−Line Tools
Summary. It's a good resource for learning what tools are out there and how
to do a number of tasks.
Many reports are currently talking about how cheap storage has gotten, but
if you're like most of us it isn't cheap enough. Most of us have a limited
amount of space, and need to be able to monitor it and control how it's used.
The df command is the simplest tool available to view disk usage. Simply type
in df and you'll be shown disk usage for all your mounted filesystems in 1K
blocks. You can also use the −h option to see the output in "human−readable"
format. This will be in K, Megs, or Gigs depending on the size of the
filesystem. Alternately, you can also use the −B option to specify block size.
In addition to space usage, you could use the −i option to view the number
of used and available inodes.
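The options mentioned above look like this in practice:

```shell
df           # all mounted filesystems, 1K blocks
df -h        # human-readable sizes
df -i        # inode usage instead of block usage
df -B 1M /   # 1 MB blocks, for the root filesystem only
```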
Now that you know how much space has been used on a filesystem, how can
you find out where that data is? To view usage by a directory or file you can
use du. Unless you specify a filename du will act recursively.
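For example (the directory name is just an illustration):

```shell
du /tmp      # usage of each subdirectory, recursively, in 1K blocks
du -s /tmp   # a single summary line for the whole tree
du -sh /tmp  # the same, in human-readable form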
7.2.3. Quotas
For more information about quotas you can read The Quota Mini−HOWTO.
Just because you're paranoid doesn't mean they AREN'T out to get you.
From time to time there are going to be occasions where you will want to
exactly what people are doing on your system. Maybe you notice that a lot of
RAM is being used, or a lot of CPU activity. You are going to want to see who
is on the system, what they are running, and what kind of resources they are
using.
The easiest way to see who is on the system is to do a who or w. The
who command is a simple tool that lists out who is logged on to the system
and what port or terminal they are logged on at.
7.3.2. The ps command − again!
In the previous section we can see that user aweeks is logged onto both
pts/1 and pts/2, but what if we want to see what they are doing? We could do
a ps −u aweeks and get the following output. From this we can see that the
user is running ps and ssh. This is a much more consolidated use of ps than
discussed previously.
Even easier than using the who and ps −u commands is to use w. w will
print out not only who is on the system, but also what they are currently
running. From this we
can see that I have a kde session running, I'm working in this document and
have another terminal open sitting idle at a bash prompt.
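For reference, the three commands compared above (aweeks is the example user from this section):

```shell
who            # who is logged in, on which terminal, since when
ps -u aweeks   # one user's processes
w              # logged-in users plus what each is currently running
```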
To Be Added
This section explains what goes on when a Linux system is brought up and
taken down, and how it should be done properly. If proper procedures are not
followed, files might be corrupted or lost.
The act of turning on a computer system and causing its operating system to
be loaded is called booting. The name comes from an image of the computer
pulling itself up from its bootstraps, but the act itself is slightly more realistic.
During bootstrapping, the computer first loads a small piece of code called
the bootstrap loader, which in turn loads and starts the operating system.
The bootstrap loader is usually stored in a fixed location on a hard disk or a
floppy. The reason for this two step process is that the operating system is
big and complicated, but the first piece of code that the computer loads
must be very small (a few hundred bytes), to avoid making the firmware
unnecessarily complicated.
After Linux has been loaded, it initializes the hardware and device drivers,
and then runs init. init starts other processes to allow users to log in, and do
things. The details of this part will be discussed below.
In order to shut down a Linux system, first all processes are told to terminate
(this makes them close any files and do other necessary things to keep
things tidy), then filesystems and swap areas are unmounted, and finally a
message is printed to the console that the power can be turned off. If the
proper procedure is not followed, terrible things can and will happen; most
importantly, the filesystem buffer cache might not be flushed, which means
that all data in it is lost and the filesystem on disk is inconsistent, and
therefore possibly unusable.
When a PC is booted, the BIOS will do various tests to check that everything
looks all right, and will then start the actual booting. This process is called
the power on self test, or POST for short. It will choose a disk drive (typically
the first floppy drive, if there is a floppy inserted, otherwise the first hard
disk, if one is installed in the computer; the order might be configurable,
however) and will then read its very first sector. This is called the boot
sector; for a hard disk, it is also called the master boot record, since a hard
disk can contain several partitions, each with their own boot sectors.
The boot sector contains a small program (small enough to fit into one
sector) whose responsibility is to read the actual operating system from the
disk and start it. When booting Linux from a floppy disk, the boot sector
contains code that just reads the first few hundred blocks (depending on the
actual kernel size, of course) to a predetermined place in memory. On a Linux
boot floppy, there is no filesystem, the kernel is just stored in consecutive
sectors, since this simplifies the boot process. It is possible, however, to boot
from a floppy with a filesystem, by using LILO, the LInux LOader, or GRUB,
the GRand Unified Bootloader.
When booting from the hard disk, the code in the master boot record will
examine the partition table (also in the master boot record), identify the
active partition (the partition that is marked to be bootable), read the boot
sector from that partition, and then start the code in that boot sector. The
code in the partition's boot sector does what a floppy disk's boot sector does:
it will read in the kernel from the partition and start it. The details vary,
however, since it is generally not useful to have a separate partition for just
the kernel image, so the code in the partition's boot sector can't just read the
disk in sequential order, it has to find the sectors wherever the filesystem
has put them. There are several ways around this problem, but the most
common way is to use a boot loader like LILO or GRUB. (The details about
how to do this are irrelevant for this discussion, however; see the LILO or
GRUB documentation for more information; it is most thorough.)
When booting, the bootloader will normally go right ahead and read in and
boot the default kernel. It is also possible to configure the boot loader to be
able to boot one of several kernels, or even other operating systems than
Linux, and it is possible for the user to choose which kernel or operating
system is to be booted at boot time. LILO, for example, can be configured so
that if one holds down the alt, shift, or ctrl key at boot time (when LILO is
loaded), LILO will ask what is to be booted and not boot the default right
away. Alternatively, the bootloader can be configured so that it will always
ask, with an optional timeout that will cause the default kernel to be booted.
It is also possible to give a kernel command line argument, after the name of
the kernel or operating system. For a list of possible options you can read
http://www.tldp.org/HOWTO/BootPrompt−HOWTO.html.
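For example, at LILO's boot: prompt one might type something like the following (the label and root device are hypothetical):

```
LILO boot: linux root=/dev/hda2 single
```

This boots the kernel labeled linux, overrides its root device, and passes the single argument, which tells init to go to single user mode.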
Booting from a floppy and from the hard disk both have their advantages, but
generally booting from the hard disk is nicer, since it avoids the hassle of
playing around with floppies. It is also faster. Most Linux distributions will
set up the bootloader for you during the install process.
After the Linux kernel has been read into the memory, by whatever means,
and is started for real, roughly the following things happen:
The Linux kernel is installed compressed, so it will first uncompress itself. The
beginning of the kernel image contains a small program that does this.
If you have a super−VGA card that Linux recognizes and that has some
special text modes (such as 100 columns by 40 rows), Linux asks you which
mode you want to use. During the kernel compilation, it is possible to preset
a video mode, so that this is never asked. This can also be done with LILO,
GRUB or rdev.
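As a sketch, with LILO the mode can be preset with a vga= line in /etc/lilo.conf, or stamped directly into an existing kernel image with rdev (the kernel path here is hypothetical):

```shell
# In /etc/lilo.conf:  vga=ask  (prompt at boot)  or  vga=<mode number>
# With rdev: -3 means "prompt", -2 "extended VGA", -1 "normal":
rdev -v /boot/vmlinuz -3
```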
After this, the kernel checks what other hardware there is (hard disks,
floppies, network adapters, etc), and configures some of its device drivers
appropriately; while it does this, it outputs messages about its findings on
the console. The exact texts are different on different systems, depending on
the hardware, the version of Linux being used, and how it has been
configured.
Then the kernel will try to mount the root filesystem. The place is
configurable at compilation time, or any time with rdev or the bootloader.
The filesystem type is detected automatically. If the mounting of the root
filesystem fails, for example because you didn't remember to include the
corresponding filesystem driver in the kernel, the kernel panics and halts the
system (there isn't much it can do, anyway). The root filesystem is usually
mounted read−only (this can be set in the same way as the place). This
makes it possible to check the filesystem while it is mounted; it is not a good
idea to check a filesystem that is mounted read−write.
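Both settings can be stamped into an existing kernel image with rdev; a sketch with hypothetical paths:

```shell
rdev /boot/vmlinuz /dev/hda2    # set the device holding the root filesystem
rdev -R /boot/vmlinuz 1         # 1 = mount the root filesystem read-only
```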
After this, the kernel starts the program init (located in /sbin/init) in the
background (this will always become process number 1). init does various
startup chores. The exact things it does depend on how it is configured; see
Section 2.3.1 for more information (not yet written). It will at least start some
essential background daemons.
init then switches to multi−user mode, and starts a getty for virtual consoles
and serial lines. getty is the program which lets people log in via virtual
consoles and serial terminals. init may also start some other programs,
depending on how it is configured.
After this, the boot is complete, and the system is up and running normally.
To be added
This section will give an overview of the differences between GRUB and
LILO. For more information on LILO, you can read
http://www.tldp.org/HOWTO/LILO.html. For more information on GRUB, you
can visit http://www.gnu.org/software/grub/grub.html.
It is important to follow the correct procedures when you shut down a Linux
system. If you fail to do so, your filesystems will probably become trashed
and your files scrambled. This is because Linux has a disk
cache that won't write things to disk at once, but only at intervals. This
greatly improves performance but also means that if you just turn off the
power at a whim the cache may hold a lot of data and that what is on the
disk may not be a fully working filesystem (because only some things have
been written to the disk).
If you are running a system where you are the only user, the usual way of
using shutdown is to quit all running programs, log out on all virtual
consoles, log in as root on one of them (or stay logged in as root if you
already are, but you should change to root's home directory or the root
directory, to avoid problems with unmounting), then give the command
shutdown −h now (substitute now with a plus sign and a number in minutes
if you want a delay, though you usually don't on a single user system).
Alternatively, if your system has many users, use the command shutdown
−h +time message, where time is the time in minutes until the system is
halted, and message is a short explanation of why the system is shutting
down. For example, shutdown −h +10 'We will install a new disk. System
should be back on−line in three hours.' will warn everybody that the system
will shut down in ten minutes, and that they'd better get lost or lose data.
The warning is printed to every terminal on which someone is logged in:
The system is going DOWN for system halt in 10 minutes
The warning is automatically repeated a few times before the shutdown,
with shorter and shorter intervals as the time runs out.
When the real shutting down starts after any delays, all filesystems (except
the root one) are unmounted, user processes (if anybody is still logged in)
are killed, daemons are shut down, and generally everything settles down.
When that is done, init prints out a
message that you can power down the machine. Then, and only then, should
you move your fingers towards the power switch.
In the old days, some people liked to shut down by using the command sync
three times, waiting for the disk I/O to stop, then turning off the power. If there
are no running programs, this is equivalent to using shutdown. However, it
does not unmount any filesystems and this can lead to problems with the
ext2fs ``clean filesystem'' flag. The triple−sync method is not recommended.
(In case you're wondering: the reason for three syncs is that in the early days
of UNIX, when the commands were typed separately, that usually gave
sufficient time for most disk I/O to be finished.)
8.4. Rebooting
Rebooting means booting the system again. This can be accomplished by
first shutting it down completely, turning power off, and then turning it back
on. A simpler way is to ask shutdown to reboot the system, instead of merely
halting it. This is accomplished by using the −r option to shutdown, for
example, by giving the command shutdown −r now.
The shutdown command can also be used to bring the system down to single
user mode, in which no one can log in, but root can use the console. This is
useful for system administration tasks that can't be done while the system is
running normally.
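With sysvinit's shutdown, leaving out both −h and −r is what requests single user mode; a sketch:

```shell
shutdown now    # no -h or -r: bring the system to single user mode
# ... perform maintenance ...
telinit 2       # return to multi-user mode (the run level number
                # varies between distributions)
```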
It is not always possible to boot a computer from the hard disk. For example,
if you make a mistake in configuring LILO, you might make your system
unbootable. For these situations, you need an alternative way of booting that
will always work (as long as the hardware works). For typical PCs, this means
booting from the floppy drive.
Most Linux distributions allow one to create an emergency boot floppy during
installation. It is a good idea to do this. However, some such boot disks
contain only the kernel, and assume you will be using the programs on the
distribution's installation disks to fix whatever problem you have. Sometimes
those programs aren't enough; for example, you might have to restore some
files from backups made with software not on the installation disks.
You can't use the floppy drive you use to mount the root floppy for anything
else. This can be inconvenient if you only have one floppy drive. However, if
you have enough memory, you can configure your boot floppy to load the
root disk to a ramdisk (the boot floppy's kernel needs to be specially
configured for this). Once the root floppy has been loaded into the ramdisk,
the floppy drive is free to mount other disks.
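As a sketch, the relevant flags live in the kernel image's ramdisk word and can be set with rdev (the path is hypothetical; bit 14 means load the root floppy into a ramdisk, bit 15 means prompt for the root floppy first):

```shell
# 49152 = 16384 (load ramdisk) + 32768 (prompt for the root floppy)
rdev -r /boot/vmlinuz 49152
```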
Chapter 9. init
This chapter describes the init process, which is the first user level process
started by the kernel. init has many important duties, such as starting getty
(so that users can log in), implementing run levels, and taking care of
orphaned processes. This chapter explains how init is configured and how
you can make use of the different run levels.
init is one of those programs that are absolutely essential to the operation of
a Linux system, but that you still can mostly ignore. A good Linux distribution
will come with a configuration for init that will work for most systems, and on
these systems there is nothing you need to do about init. Usually, you only
need to worry about init if you hook up serial terminals, dial−in (not
dial−out) modems, or if you want to change the default run level.
When the kernel has started itself (has been loaded into memory, has started
running, and has initialized all device drivers and data structures and such),
it finishes its own part of the boot process by starting a user level program,
init. Thus, init is always the first process (its process number is always 1).
The kernel looks for init in a few locations that have been historically used for
it, but the proper location for it (on a Linux system) is /sbin/init. If the kernel
can't find init, it tries to run /bin/sh, and if that also fails, the startup of the
system fails.
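On a running system you can confirm that process number 1 is init by asking the /proc filesystem; a minimal sketch:

```shell
# /proc/1/comm holds the command name of process number 1
cat /proc/1/comm    # on a sysvinit system this prints "init"
```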
After the system is properly up, init restarts getty for each terminal after a
user has logged out (so that the next user can log in). init also adopts orphan
processes: when a process starts a child process and dies before its child, the
child immediately becomes a child of init. This is important for various
technical reasons, but it is good to know it, since it makes it easier to
understand process lists and process tree graphs. There are a few variants of
init available. Most Linux distributions use sysvinit (written by Miquel van
Smoorenburg), which is based on the System V init design. The BSD versions
of Unix have a different init. The primary difference is run levels: System V
has them, BSD does not (at least traditionally). This difference is not
essential. We'll look at sysvinit only.
When it starts up, init reads the /etc/inittab configuration file. While the
system is running, it will re−read it if sent the HUP signal (kill −HUP 1); this
feature makes it unnecessary to reboot the system to make changes to the init
configuration take effect. The /etc/inittab file is a bit complicated. We'll start
with the simple case of configuring getty lines. Lines in /etc/inittab consist of
four colon−delimited fields.
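A sketch of typical getty entries from /etc/inittab (the getty program, speed, and run levels vary between distributions):

```
# id:runlevels:action:process
1:2345:respawn:/sbin/getty 38400 tty1
2:2345:respawn:/sbin/getty 38400 tty2
```

The respawn action tells init to restart getty whenever it exits, that is, whenever a user logs out.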