Open Source Technologies

The document outlines a course on Linux and shell programming, detailing objectives, units, and practical lab exercises. It covers Linux history, architecture, shell scripting, decision-making, and system administration. The course includes hands-on scripting tasks to reinforce learning in shell commands and system configuration.


CORE X : LINUX AND SHELL PROGRAMMING

COURSE OBJECTIVES
• To understand the basics of Linux OS
• To study shell programming

UNIT - I

Introduction to Linux: History of Linux – Linux Architecture – Kernel – Uses of Linux – Linux
distributions - Linux Essential Commands – Files and directories - File types - Linux System Standard
Files – The vi Editor.

UNIT - II

Introduction to Shell scripting: Shell – Shell Types – Structure of bash shell script – Script file names
and permissions – Variables: Variable names, Defining and accessing variables, Variable types, Special
variables – Read and Echo commands – Basic operators: Arithmetic Operators, Relational Operators,
Boolean Operators, String Operators and File Test Operators

UNIT - III

Decision Making: if statement, if else statement, elif ladder and case statement- Looping: while loop,
for loop and until loop – break and continue statements – Meta characters -Substitution in expression
and command substitution - Input and Output redirection.

UNIT - IV

Arrays - User-defined functions – Command line arguments – String processing – Process basics –
Commands related with processes – Filter commands.

UNIT - V
Basic System administration: Super User Control – Scheduling tasks using cron – System run levels –
Configuration directories and files – User configuration files – Adding and Removing Users and Groups

TEXT BOOK
1. The Complete Reference: Linux by Richard L. Petersen, McGraw Hill.
2. Linux Shell Scripting by Ganesh Naik, Packt Publishing Ltd.

REFERENCE BOOK
1. Linux Shell Scripting with Bash 1st Edition by Ken O. Burtch

PRACTICAL VIII : SHELL PROGRAMMING LAB

COURSE OBJECTIVES
• Simulate the file commands
• Write simple shell scripting

1. Write a shell script for basic arithmetic and logical calculations


2. Write a shell script to demonstrate the file commands: rm, cp, cat, mv, cmp, wc, split, diff using
choice menus (use elif ladder).
3. Write a shell script to show the following system configuration:
a. currently logged user and his log name.
b. current shell, home directory, Operating System type, current Path setting, current
working directory.
c. show currently logged number of users, show all available shells
d. show CPU information like processor type, speed
e. show memory information.
4. Write a Shell Script to demonstrate the following: pipes, Redirection and tee commands.
5. Write a shell script for displaying current date, user name, file listing and directories by getting user
choice (use case statement).
6. Write a shell script to create an array and perform various operations in that array.
7. Write a shell script to demonstrate the filter commands.
8. Write a shell script to remove the files which has file size as zero bytes.
9. Write a shell script to find the sum of the individual digits of a given number.
10. Write a shell script to find the greatest among the given set of numbers using command line
arguments.
11. Write a shell script for palindrome checking.
12. Write a shell script to print the multiplication table of the given argument using for-loop.
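
As a taste of what these exercises involve, here is one possible solution sketch for exercise 9 (sum of the individual digits of a given number); the default value 1234 is illustrative:

```shell
#!/bin/sh
# Sum the digits of a number passed as the first argument.
# Usage: sh sumdigits.sh 1234    (prints 10)
n=${1:-1234}                # fall back to 1234 when no argument is given
sum=0
while [ "$n" -gt 0 ]; do
    digit=$((n % 10))       # extract the last digit
    sum=$((sum + digit))    # accumulate it
    n=$((n / 10))           # drop the last digit
done
echo "Sum of digits: $sum"
```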

UNIT - I
Introduction to Linux: History of Linux
History

• UNIX: 1969, Thompson & Ritchie, AT&T Bell Labs.
• 1973: Entirely rewritten in C.
• BSD: 1978, Berkeley Software Distribution.
• Commercial vendors: Sun, HP, IBM, SGI, DEC.
• GNU: 1984, Richard Stallman, FSF (recursive acronym for "GNU's Not Unix!").
• POSIX: 1986, IEEE Portable Operating System Interface.
• Minix: 1987, Andy Tanenbaum.
• SVR4 (System V Release 4): 1989, AT&T and Sun.
• Linux: 1991, Linus Torvalds, Intel 386 (i386).
• Open Source: GPL.

1969 - The Beginning: The history of UNIX starts back in 1969, when Ken Thompson, Dennis Ritchie and others started working on the "little-used PDP-7 in a corner" at Bell Labs on what was to become UNIX.

1971 - First Edition: It had an assembler for a PDP-11/20, a file system, fork(), roff (text formatting) and ed. It was used for text processing of patent documents.

1973 - Fourth Edition: It was rewritten in C. This made it portable and changed the history of operating systems.

1975 - Sixth Edition: UNIX leaves home. Also widely known as Version 6, this is the first version to be widely available outside of Bell Labs. The first BSD (Berkeley Software Distribution) version (1.x) was derived from V6.

1979 - Seventh Edition: It was an "improvement over all preceding and following Unices" [Bourne]. It had C, UUCP (Unix-to-Unix Copy Program) and the Bourne shell. It was ported to the VAX, and the kernel was more than 40 kilobytes (K).

1980 - Xenix: Microsoft introduces Xenix. 32V and 4BSD introduced.

1982 - System III: AT&T's UNIX System Group (USG) releases System III, the first public release outside Bell Laboratories. SunOS 1.0 ships. HP-UX introduced. Ultrix-11 introduced.

1983 - System V: Computer Research Group (CRG), UNIX System Group (USG) and a third group merge to become UNIX System Development Lab. AT&T announces UNIX System V, the first supported release. Installed base: 45,000.

1984 - 4.2BSD: University of California at Berkeley releases 4.2BSD, including TCP/IP, new signals and much more. X/Open formed.

1984 - SVR2: System V Release 2 introduced. At this time there are 100,000 UNIX installations around the world.

1986 - 4.3BSD: 4.3BSD released, including an internet name server. SVID introduced. NFS shipped. AIX announced. Installed base: 250,000.

1987 - SVR3: System V Release 3 released, including STREAMS, TLI (network software) and RFS (remote file sharing). At this time there are 750,000 UNIX installations around the world. IRIX introduced.

1988: POSIX.1 published. Open Software Foundation (OSF) and UNIX International (UI) formed. Ultrix 4.2 ships.

1989: AT&T UNIX Software Operation formed in preparation for the spinoff of USL. Motif 1.0 ships.

1989 - SVR4: UNIX System V Release 4 ships, unifying System V, BSD and Xenix. Installed base: 1.2 million.

1990 - XPG3: X/Open launches the XPG3 brand. OSF/1 debuts. Plan 9 from Bell Labs ships.

1991: UNIX System Laboratories (USL) becomes a company, majority-owned by AT&T. Linus Torvalds commences Linux development. Solaris 1.0 debuts.

1992 - SVR4.2: USL releases UNIX System V Release 4.2 (Destiny). October: the XPG4 brand is launched by X/Open. December 22nd: Novell announces intent to acquire USL. Solaris 2.0 ships.

1993 - 4.4BSD: 4.4BSD, the final release from Berkeley. June 16: Novell acquires USL.

Late 1993 - SVR4.2MP: Novell transfers rights to the "UNIX" trademark and the Single UNIX Specification to X/Open. The COSE initiative delivers "Spec 1170" to X/Open for fast-track. In December Novell ships SVR4.2MP, the final USL OEM release of System V.

1994 - Single UNIX Specification: BSD 4.4-Lite eliminated all code claimed to infringe on USL/Novell. As the new owner of the UNIX trademark, X/Open introduces the Single UNIX Specification (formerly Spec 1170), separating the UNIX trademark from any actual code stream.

1995 - UNIX 95: X/Open introduces the UNIX 95 branding programme for implementations of the Single UNIX Specification. Novell sells the UnixWare business line to SCO. Digital UNIX introduced. UnixWare 2.0 ships. OpenServer 5.0 debuts.

1996: The Open Group forms as a merger of OSF and X/Open.

1997 - Single UNIX Specification, Version 2: The Open Group introduces Version 2 of the Single UNIX Specification, including support for realtime, threads, and 64-bit and larger processors. The specification is made freely available on the web. IRIX 6.4, AIX 4.3 and HP-UX 11 ship.

1998 - UNIX 98: The Open Group introduces the UNIX 98 family of brands, including Base, Workstation and Server. First UNIX 98 registered products shipped by Sun, IBM and NCR. The Open Source movement starts to take off with announcements from Netscape and IBM. UnixWare 7 and IRIX 6.5 ship.

1999 - UNIX at 30: The UNIX system reaches its 30th anniversary. Linux 2.2 kernel released. The Open Group and the IEEE commence joint development of a revision to POSIX and the Single UNIX Specification. First LinuxWorld conferences. Dot-com fever on the stock markets. Tru64 UNIX ships.

2001 - Single UNIX Specification, Version 3: Version 3 of the Single UNIX Specification unites IEEE POSIX, The Open Group and industry efforts. Linux 2.4 kernel released. IT stocks face a hard time in the markets. The value of procurements for the UNIX brand exceeds $25 billion. AIX 5L ships.

2003 - ISO/IEC 9945:2003: The core volumes of Version 3 of the Single UNIX Specification are approved as an international standard. The "Westwood" test suite ships for the UNIX 03 brand. Solaris 9.0 E ships. Linux 2.6 kernel released.

2007: Apple Mac OS X certified to UNIX 03.

2008 - ISO/IEC 9945:2008: Latest revision of the UNIX API set formally standardized at ISO/IEC, IEEE and The Open Group. Adds further APIs.

2009 - UNIX at 40: IDC (International Data Corporation) reports on the UNIX market: $69 billion in 2008, with a prediction of $74 billion in 2013.

2010 - UNIX on the Desktop: Apple reports 50 million desktops and growing; these are certified UNIX systems.

Linux Features

• UNIX-like operating system.
• Preemptive multitasking (the kernel can forcibly take the CPU away from already running processes).
• Virtual memory (a storage-allocation scheme in which secondary memory can be addressed as though it were part of main memory) with paging (a storage mechanism that lets the OS retrieve processes from secondary storage into main memory in the form of pages).
• Shared libraries.
• Demand loading and dynamic kernel modules (a page or module is loaded only on demand, not always).
• Shared copy-on-write executables.
• TCP/IP networking.
• SMP support (processing of programs by multiple processors that share a common operating system and memory).
• Portable − portability means software can work on different types of hardware in the same way. The Linux kernel and application programs can be installed on almost any kind of hardware platform.
• Open Source − Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.
• Multi-User − Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM and application programs at the same time.
• Multiprogramming − Linux is a multiprogramming system, meaning multiple applications can run at the same time.
• Hierarchical File System − Linux provides a standard file structure in which system files and user files are arranged.
• Shell − Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.
• Security − Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Linux is an open-source version of Unix developed by Linus Torvalds and first released on 5 October 1991 as a port of Unix to the Intel x86 processor. This made Unix available on the most ubiquitous computer hardware that has ever existed, and therefore available to almost everyone. Linux has since been ported to almost every processor and function one could imagine, including game consoles, personal digital assistants (PDAs), personal digital video recorders, and IBM mainframes, expanding the original concept of Unix for x86 to Unix for everything. Linux isn't the only version of Unix available to most people (also notable are the various BSD variants, also open source), but it is by far the most popular.

Linux Architecture

Hardware: Hardware consists of all physical devices attached to the System. For example: Hard disk
drive, RAM, Motherboard, CPU etc.

Kernel: The kernel is the core component of any (Linux) operating system and directly interacts with the hardware. It is the master program that provides file-related activities, process scheduling, memory management, and various other operating system functions through system calls. In other words, it controls the resources of the computer system and allocates them to different users and different tasks. The major portion of the kernel is written in the C language, which makes it easy to understand, debug, and enhance, and also portable in nature. In the architecture diagram the kernel is placed between the hardware and the shell, because it works between the two. Moreover, the kernel maintains various data structures to manage processes. Each process has its own priority: a higher-priority process is executed before a lower-priority process.
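
Process priorities can be observed from the shell. A small sketch using nice and ps (assuming the common procps-style ps options; the nice value 10 is illustrative):

```shell
# Start a short-lived child at a lower priority (nice value 10).
nice -n 10 sleep 1 &
pid=$!                        # PID of the background child
ni=$(ps -o ni= -p "$pid")     # read back its nice value from ps
echo "PID $pid runs with nice value:$ni"
wait "$pid"                   # let the child finish before continuing
```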

Kernel is divided into two parts:


1. Process management
2. File management

The primary task of process management is to handle memory-management activities and process-related activities at the different states of execution: creation and deletion of processes, scheduling of processes, and provision of mechanisms for synchronization, communication and deadlock handling. The task of file management is to manage file-related activities. Unix treats I/O devices as files: each I/O device has its own file, known as a device driver, to drive it. The file-management part of the kernel handles these device drivers and stores these files in the directory "/dev" under the root directory. If one attaches a new I/O device to a Unix system, it is necessary to create a file for that device in the "/dev" directory and record its characteristics in that file, such as its type (character-oriented or block-oriented), the address of the driver program, and the memory buffer reserved for the device. The kernel is loaded into memory first when the operating system boots and remains in memory until the operating system is shut down.
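
The device files under /dev can be probed with the shell's file-test operators; a small sketch (assuming a standard Linux /dev layout):

```shell
# /dev/null is a character device on Linux; -c tests for that.
if [ -c /dev/null ]; then
    kind="character"
else
    kind="other"
fi
echo "/dev/null is a $kind device"
# A block device such as a disk would pass the -b test instead:
# [ -b /dev/sda ] && echo "/dev/sda is a block device"
```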

Kernel types:
1. Monolithic Kernel: A kernel type in which all operating system services operate in kernel space. It has dependencies between system components and a huge, complex code base.
2. Microkernel: A kernel type that takes a minimalist approach: only essentials such as virtual memory and thread scheduling run in kernel space, which makes it more stable; the rest is put in user space.
3. Hybrid Kernel: A combination of the monolithic kernel and the microkernel. It has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.

Shells: An interface to the kernel that hides the complexity of the kernel's functions from users. The shell takes commands from the user and executes the kernel's functions. The shell prompt waits for input from the user. When the user types a command and presses the Enter key, the shell obtains the command, executes it if possible, and displays the prompt symbol again in order to receive the next command. That is why the shell is also called the Linux system command interpreter. Moreover, to access the hardware, a developer makes a request to the shell, the shell makes a request to the kernel, and finally the kernel makes a request to the hardware. Basically, the shell handles the user's interaction with the system. Some built-in commands are part of the shell, and the remaining commands are separate programs stored elsewhere.

There are four major types of shells that are widely used and exist in the Linux OS:

1. Bourne shell: This shell was designed by Stephen Bourne of Bell Labs. It is a powerful and very widely used shell. The prompt symbol of the Bourne shell is the dollar ($) sign.
2. The C Shell: The C shell was developed at the University of California by Bill Joy. The C shell gets its name from its programming language, which resembles the C programming language in syntax. The prompt symbol of the C shell is the percent (%) sign.
3. Korn Shell: Like the Bourne shell, the Korn shell was developed at AT&T's Bell Labs. This shell gets its name from its inventor, David Korn. The non-root user default prompt is $; the root user default prompt is #.
4. Bash Shell: The default shell of the Linux system is Bash (Bourne Again SHell). It includes features from the Korn and Bourne shells.
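
A script can report which of the above shells is interpreting it: the special parameter $0 holds the name the shell or script was invoked as. A minimal sketch:

```shell
#!/bin/sh
# The shebang line above selects the interpreter for this script;
# $0 reports the name it was invoked as.
current=$0
echo "This script is interpreted by: $current"
# Interactive users can list the shells installed on the system
# (the file path is standard on most Linux distributions):
# cat /etc/shells
```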

Utility/Application programs: The Linux system contains a large number of utility and application programs, such as editors (ed, ex, vi, sed, nano) and so on. These utility programs, and the application programs developed in a Linux environment, are also easily portable to another machine having the same environment.

Linux Kernel

The kernel is the master program that provides file-related activities, process scheduling, memory management, and various other operating system functions through system calls. In other words, it controls the resources of the computer system and allocates them to different users and different tasks. The major portion of the kernel is written in the C language, which makes it easy to understand, debug, and enhance, and also portable in nature. The modules or sub-systems that provide the operating system functions are collectively called the kernel.
• It is the Core of OS.
• Controls and mediates access to hardware.
• Implements and supports fundamental abstractions:
o Processes, files, devices etc.
• Schedules / allocates system resources:
o Memory, CPU, disk, descriptors, etc.
• Enforces security and protection.
• Responds to user requests for service (system calls).

The kernel acts as an intermediary between the computer hardware and various
programs/application/shell.

Kernel Architecture:

User Space:
The User Space is the space in memory where user processes run.
This space is protected: the system prevents one process from interfering with another process.
Only kernel processes can access a user process.

Kernel Space:
The kernel Space is the space in memory where kernel processes run.
The user has access to it only through system calls.

System Call:
User Space and Kernel Space are in different spaces.
When a System Call is executed, the arguments to the call are passed from User Space to Kernel
Space.
A user process becomes a kernel process when it executes a system call.

File System:
It is responsible for storing information on disk and retrieving and updating this information.
The File System is accessed through system calls such as : open, read, write, …
Example:
FAT16, FAT32, NTFS, ext2, ext3, ext4 etc. (Extended File System).
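
The shell's redirection operators are built on these same open/read/write system calls; a minimal sketch (the file name demo.txt is illustrative):

```shell
# '>'  truncates/creates the file (open with O_CREAT|O_TRUNC, then write)
echo "hello" > demo.txt
# '>>' appends (open with O_APPEND, then write)
echo "again" >> demo.txt
# reading the file back (open, read, write to stdout)
cat demo.txt
# '<'  redirects stdin from the file
lines=$(wc -l < demo.txt)
echo "demo.txt has $lines lines"
rm demo.txt                  # clean up
```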

Process Management:
The Unix OS is a time-sharing system.
Every process is scheduled to run for a period of time (time slice).
The kernel creates, manages and deletes processes:

Every process (except init) in the system is created as the result of a fork system call.
The fork system call splits a process into two processes (Parent and Child).
Each process has a unique identifier (Process ID).
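
The fork-and-wait behaviour described above can be observed directly from the shell, where & runs a child process in the background while the parent continues:

```shell
# The shell fork()s to create the child; '&' keeps the parent running.
sleep 1 &
child=$!                     # $! holds the PID of the last background process
echo "child PID: $child"
wait "$child"                # parent blocks until the child exits
status=$?
echo "child exited with status $status"
```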

Device Driver:
One of the purposes of an OS is to hide the system's hardware from the user.
Instead of putting code to manage the HW controller into every application, the code is kept in the
Linux kernel.
It abstracts the handling of devices.
All HW devices look like regular files.

Type of devices:
Character devices
A character device is one that can be accessed as a stream of bytes.
Example: Keyboard, Mouse, …
Block devices
A block device can be accessed only as multiples of a block.
Example: disk, …
Network devices
They are created by the Linux kernel.

Memory Management:
Physical memory is limited.
Virtual memory is developed to overcome this limitation.

Virtual memory:
Large Address space
Protection
Memory mapping
Fair physical memory allocation
Shared virtual memory
In a modern OS, the virtual memory concept is generally implemented using demand paging and page-replacement algorithms. The OS may directly execute read/write operations in virtual memory.

Swap memory:
It is a configurable partition on disk treated in a manner similar to memory. It contains inactive processes or pages. The OS never executes read/write operations directly in swap memory.
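
Physical and swap memory totals can be inspected from the shell via the /proc interface; a small sketch (assumes a Linux /proc filesystem):

```shell
# MemTotal and SwapTotal (in kB) are exported by the kernel in /proc/meminfo.
grep -E '^(MemTotal|SwapTotal)' /proc/meminfo
# Extract the physical memory size for use in a script:
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
echo "Physical memory: $mem_kb kB"
```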

Network Layers:

1. BSD socket layer
It is a general interface (abstraction layer).
Used in networking and IPC.

2. INET socket layer
It supports the Internet address family.
Its interface with the BSD socket layer is through a set of operations registered with the BSD
socket layer.

Types of INET sockets:

Stream Socket
Provides a reliable, sequenced, two-way connection (such as TCP).
Datagram Socket
A connection-less and unreliable connection (such as UDP).
Raw Socket
Used for internal network protocols.

Uses of Linux
Linux OS manages hardware resources, launches and handles applications, and provides some form of
user interface. The enormous community for developers and wide range of distributions means that a
Linux version is available for almost any task, and Linux has penetrated many areas of computing.
Linux can be used as a server operating system or as a standalone operating system on a PC (but it is
best suited for servers). As a server operating system, it provides different services and network
resources to clients. A server OS must be:
• Stable
• Robust
• Secure
• High Performance

Linux has emerged as a popular OS for web servers such as Apache, as well as for network operations,
scientific computing tasks that require huge compute clusters, running databases, desktop and endpoint
computing, and running mobile devices with OS versions like Android.

Linux offers all of the above characteristics, plus it is an open-source and free OS. So, Linux can be
used:

(1) On a standalone workstation/PC for word processing, graphics, software development,
internet, e-mail, chatting, a small personal database management system, etc.

(2) In a network environment as:

(A) File, Print or Application Server:
Sharing data, connecting an expensive device like a printer and sharing it, and e-mail within the
LAN/intranet are some of the applications.

Linux Server with different Client Operating Systems

(B) Proxy/WWW/Mail Server:
A Linux server can be connected to the Internet, so that PCs on the intranet can share the internet,
e-mail, etc. You can run your web server that hosts your web site or transmits information on the
internet.

Linux Server can act as Proxy/Mail/WWW/Router Server etc.

Headless server OS for systems that do not require a graphical user interface (GUI) or a directly connected
terminal and keyboard. Headless systems are often used for remotely managed networking servers and
other devices.

Embedded device or appliance OS for systems that require limited computing function. Linux is used
as an embedded OS for a variety of applications, including household appliances, automotive
entertainment systems and network file system appliances.

Network OS for routers, switches, domain name system servers, home networking devices and more.
For example, Cisco offers a version of the Cisco Internetwork Operating System (IOS) that uses the
Linux kernel.

Software development OS for enterprise software development. Although many development tools
have been ported to Windows or other OSes, Linux is home to some of the most widely used open-
source software development tools. For example, git for distributed source control; vim and emacs for
source code editing; and compilers and interpreters for almost every programming language.

Cloud OS for cloud instances. Major cloud computing providers offer access to cloud computing
instances running Linux for cloud servers, desktops and other services.

Linux can be optimized for different purposes such as:

• networking performance;
• computation performance;
• deployment on specific hardware platforms; and
• deployment on systems with limited memory, storage or computing resources.

So, Linux can be used for:

• Personal Work
• Server
• Software Development Workstation
• Workgroup Server
• In Data Center for various server activities such as FTP, Telnet, SSH, Web, Mail, Proxy,
Proxy Cache Appliance etc.
• It is used in clustered environments.

The pros and cons of using Linux


Advantages of Linux:

• Highly Secure
• Stable
• Free and Open Source
• Easy to use
• Absolute Freedom over your system
• High Performance
• Proper use of System Resources
• Privacy-Friendly
• Easily Install Software
• Better Software Updates

Disadvantages Of Linux:

• No standard edition
• Hard Learning Curve
• Limited market share
• Lack of proprietary software
• Difficult to troubleshoot
• Poor support for games
• Unsupported Hardware
• Lack of technical support
• No hibernation
• No unified installer/package manager

Linux distributions

A Linux distribution is a collection of software applications built on top of the Linux kernel and
operating system. There are many variations between distributions, as each strives to provide a unique
user experience. Broadly, Linux distributions may be:
• Commercial or non-commercial;
• Designed for enterprise users, power users, or for home users;
• Supported on multiple types of hardware, or platform-specific, even to the extent of
certification by the platform vendor;
• Designed for servers, desktops, or embedded devices;
• General purpose or highly specialized toward specific machine functionalities (e.g. firewalls,
network routers, and computer clusters);
• Targeted at specific user groups, for example through language internationalization and
localization, or through inclusion of many music production or scientific computing
packages;
• Built primarily for security, usability, portability, or comprehensiveness.

Sample Distributions:

1. Debian: Debian GNU/Linux is developed and maintained entirely by volunteers. The Advanced Package Tool
(apt) is very powerful and highly esteemed, and Debian is one of the easiest distributions to keep up to date: the
apt-get mechanism handles package dependencies neatly. Debian is renowned for being a mother to popular Linux
distributions such as Deepin, Ubuntu, and Mint, which have provided solid performance, stability, and an unparalleled
user experience. The Debian project provides over 59,000 software packages and supports a wide range of PCs,
with each release encompassing a broader array of system architectures. It strives to strike a balance between
cutting-edge technology and stability, and does not ship with the very latest software applications. Nevertheless, it
is ideal for production servers owing to its stability and reliability.

2. Red Hat: This is probably the most popular Linux distribution in the United States. Red Hat has worked with
a number of vendors to provide Red Hat Linux preinstalled on PCs. Red Hat is application-rich and easy for new
Linux users to learn. Red Hat created the Red Hat Package Manager (RPM) system, a system similar to apt for
keeping software up to date, which is used by many distributions. The Red Hat up2date mechanism provides
package dependency resolution similar to apt-get. For what it's worth, apt-get has been ported to Red Hat, and you can
update your distribution in that manner. Red Hat employs a number of high-profile open-source software
developers. Red Hat is usually a top choice for server environments given its stability and regular security patches,
which boost its overall security. RHEL (Red Hat Enterprise Linux) is subscription-based, and the subscription is
renewed annually. You can purchase a license for an array of subscription models such as Linux Developer
Workstation, Linux Developer Suite, and Linux for Virtual Datacenters.

3. Gentoo: Gentoo is built for professional use and experts who take into consideration what packages
they are working with from the word go. This category includes developers, system & network
administrators. As such, it’s not ideal for beginners in Linux. Gentoo comes recommended for those
who want to have a deeper understanding of the ins and outs of the Linux operating system.

4. Ubuntu: Created and maintained by Canonical, Ubuntu is one of the most popular Linux distros,
enjoyed across the globe by beginners, intermediate users, and professionals alike. Ubuntu was
specifically designed for beginners in Linux or those transitioning from macOS and Windows. By
default, Ubuntu ships with the GNOME desktop environment and everyday out-of-the-box applications
such as Firefox, LibreOffice, image-editing applications such as GIMP, music players, and video
players such as Audacious and Rhythmbox. The latest version is Ubuntu 20.04 LTS, codenamed Focal
Fossa. Ubuntu forms the basis of several other Linux distributions; some of the distributions based on
Ubuntu 20.04 include Lubuntu 20.04 LTS, Kubuntu 20.04, and Linux Mint 20 (Ulyana).

5. CentOS: The CentOS Project is a community-driven free operating system that aims at delivering a
robust and reliable open-source ecosystem. Based on RHEL, CentOS is a perfect alternative to Red Hat
Enterprise Linux since it is free to download and install. It gives users the stability and reliability of
RHEL while allowing them to enjoy free security and feature updates. CentOS 8 is a favorite among
Linux enthusiasts who want to savour the benefits of RHEL.

6. SuSE: The most popular distribution in Europe, especially Germany, SuSE has become more popular in the
United States as well. It uses a variant of the RPM package format and includes a sophisticated system
configuration tool called YaST (Yet another Setup Tool) to make administration easier. SuSE supports many types
of hardware and many configurations not available in other distributions, and includes several security scripts and
tools that can be run to inform you of problems.

7. Mandrake: Mandrake took the Red Hat distribution, added an easy-to-use installer, and changed the default
desktop from GNOME (GNU Network Object Model Environment) to KDE, thus arriving at one of the more
popular distributions.

8. Fedora: Fedora has enjoyed a reputation for being one of the most user-friendly distros for quite a
while now, owing to its simplicity and out-of-the-box applications which enable newcomers to get
started easily. It's a powerful and flexible operating system tailored for desktops & laptops, servers,
and even IoT ecosystems. Fedora, just like CentOS, is based on Red Hat and is in fact a testing
environment for Red Hat before features transition to the Enterprise phase. As such, it's usually used for
development and learning purposes and comes in handy for developers and students.

9. Kali Linux: Kali Linux (formerly known as BackTrack Linux) is an open-source, Debian-based
Linux distribution aimed at advanced Penetration Testing and Security Auditing. Kali Linux contains
several hundred tools targeted towards various information security tasks, such as Penetration Testing,
Security Research, Computer Forensics and Reverse Engineering. Kali Linux is a multi-platform
solution, accessible and freely available to information security professionals and hobbyists.

10. Slackware: One of the first Linux distributions, Slackware is still used by many hard-core Linux users. It
contains user-friendly interfaces similar to other Linux distributions, but it generally goes for "power over pretty."

Linux Essential Commands

S.No. Command Description Usage/Example


1 man Show the manual for commands (help) $man cd
2 mkdir Creates a new directory $mkdir ramu
3 ls List directory contents. -l for long listing $ls $ls -l
4 cd Change directory $cd ramu
5 pwd Print working directory $pwd
6 cp Copy files $cp abc.txt abc.bak
7 mv Move files (renames) $mv abc.bak abc.txt
8 rm Remove files or directories $rm abc.txt
-r recursively delete folder/files $rm -r tmp
9 find search for files in a directory tree $find -name abc.txt
10 history List recently used commands $history
11 cat Displays file contents in the terminal $cat abc.txt
12 echo Displays a line of text $echo “how are you”
13 grep Searches for the pattern in the files $grep -i apple sample.txt
14 wc Prints number of lines, words and $wc file1.txt
characters in a file
15 sort Sort lines of text files $sort abc.txt
16 chmod Change file access permissions $chmod 755 abc.txt
4-read, 2-write, 1-execute; 755 = 7 (rwx) for owner,
5 (r-x) for group, 5 (r-x) for others
17 chown Change file owner and group $chown root myfile.txt
18 who Who is logged in $who
19 ps Lists processes currently running; -ef for detailed listing $ps $ps -ef
20 kill Kills a user process $kill -9 1234
21 df Report file system disk space usage $df
22 exit Exits out of the current program, terminates $exit
the current command line terminal, or logs
out based on the context
23 date Prints the current date and time $date
24 clear Clears the screen $clear
25 passwd Change the password of the current user $passwd
26 sudo or su Execute the command as the super user ("super user do") $sudo gedit /etc/hostname
27 head Display top n lines of the given file (default is 10) $head -n 5 filename.ext
28 tail Display last n lines of the given file (default is 10) $tail -n 5 filename.ext
29 ping Check the connectivity status to a server $ping 127.23.29.152
30 wget Download files from the internet $wget http://abc.com/file1.gz
31 uname Displays the Unix name $uname
32 ip addr Displays the IP address and other network-related details $ip addr
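Several of the commands in the table can be exercised together. The sketch below works in a throwaway directory (the file and directory names are made up for illustration) so nothing on the system is touched:

```shell
#!/bin/bash
# Combine mkdir, cd, echo, cp, grep, wc and rm from the table above.
mkdir -p /tmp/demo && cd /tmp/demo   # create and enter a scratch directory
echo "apple"  > fruits.txt           # create a file
echo "banana" >> fruits.txt          # append a second line
cp fruits.txt fruits.bak             # copy it
grep -c a fruits.txt                 # lines containing "a" -> prints: 2
wc -l < fruits.txt                   # line count          -> prints: 2
cd / && rm -r /tmp/demo              # clean up
```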

Files and directories

File in Linux:

A file is a collection of data items stored on disk: information, data, music (MP3 files), pictures, movies,
sounds, books, etc. In fact, whatever we store in a computer must be in the form of a file. Files are always
associated with devices like hard disks, floppy disks, etc. The file is the last object in the file system tree.

Directory in Linux:

A directory is a group of files. Directories are divided into two types:

• Root directory - Strictly speaking, there is only one root directory in your system, which is
denoted by / (forward slash). It is root of your entire file system and cannot be renamed or
deleted.
• Sub directory - Directory under root (/) directory is subdirectory which can be created,
renamed by the user.

Directories are used to organize data files and programs more efficiently.

For convenience, the Linux file system is usually thought of in a tree structure. On a standard Linux
system, one will find the layout generally follows the scheme presented below.

File System:
The UNIX file system is a hierarchical arrangement of directories and files. Everything starts in the directory
called root, whose name is the single character /. A directory is a file that contains directory entries. Logically,
one can think of each directory entry as containing a filename along with a structure of information describing
the attributes of the file. The attributes of a file are such things as the type of file (regular file, directory), the
size of the file, the owner of the file, the permissions for the file (whether other users may access this file), and
when the file was last modified. The stat and fstat functions return a structure of information containing all the
attributes of a file.

FileName:
The names in a directory are called filenames. The only two characters that cannot appear in a filename are the
slash character (/) and the null character. The slash separates the filenames that form a pathname and the null
character terminates a pathname. Two filenames are automatically created whenever a new directory is created: .
(called dot) and .. (called dot-dot). Dot refers to the current directory, and dot-dot refers to the parent directory. In
the root directory, dot-dot is the same as dot.

Pathname:
A sequence of one or more filenames separated by slashes and optionally starting with a slash, forms a pathname.
A pathname that begins with a slash is called an absolute pathname ; otherwise, it's called a relative pathname.
Relative pathnames refer to files relative to the current directory. The name for the root of the file system (/) is a
special-case absolute pathname that has no filename component.
Working Directory
Every process has a working directory, sometimes called the current working directory. This is the directory from
which all relative pathnames are interpreted. A process can change its working directory with the chdir function.

Home Directory
When we log in, the working directory is set to our home directory. Our home directory is obtained from our
entry in the password file.
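The pathname rules above can be seen in action. A minimal sketch, using made-up directory names under /tmp:

```shell
#!/bin/bash
# Absolute vs relative pathnames, and dot-dot for the parent directory.
mkdir -p /tmp/pathdemo/sub
cd /tmp/pathdemo
pwd                      # prints: /tmp/pathdemo (the working directory)
cd sub                   # relative pathname, resolved against the working directory
cd ..                    # dot-dot moves to the parent directory
cd /tmp/pathdemo/sub     # the same place, reached with an absolute pathname
pwd                      # prints: /tmp/pathdemo/sub
cd / && rm -r /tmp/pathdemo
```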

Linux file system layout:

Subdirectories of the root directory:

Directory Content
/bin Common programs, shared by the system, the system administrator and the users.
/boot The startup files and the kernel, vmlinuz. In some recent distributions also grub data. Grub is the
GRand Unified Boot loader and is an attempt to get rid of the many different boot-loaders we know today.
/dev Contains references to all the CPU peripheral hardware, which are represented as files with special
properties.
/etc Most important system configuration files are in /etc; this directory contains data similar to that in the
Control Panel in Windows.
/home Home directories of the common users.
/lib Library files, includes files for all kinds of programs needed by the system and the users.
/mnt Standard mount point for external file systems, e.g. a CD-ROM or a digital camera.
/root The administrative user's home directory. Mind the difference between /, the root directory, and /root,
the home directory of the root user.
/sbin Programs for use by the system and the system administrator.
/tmp Temporary space for use by the system, cleaned upon reboot, so don't use this for saving any work!
/usr Programs, libraries, documentation etc. for all user-related programs.
/var Storage for all variable files and temporary files created by users, such as log files, the mail queue, the
print spooler area, space for temporary storage of files downloaded from the Internet, or to keep an image
of a CD before burning it.

File types

In Linux, everything is considered as a file. In Linux, seven standard file types are regular, directory,
symbolic link, FIFO special, block special, character special, and socket. In Linux/UNIX, we have to
deal with different file types to manage them efficiently.

In Linux/UNIX, Files are mainly categorized into 3 parts:

1. Regular Files
2. Directory Files
3. Special Files

The easiest way to find out file type in any operating system is by looking at its extension such as .txt,
.sh, .py, etc. If the file doesn’t have an extension, then in Linux, we can use file utility.

To find out file types we can use the file command.

Syntax: file [OPTION…] [FILE…]

We can test a file type by typing the following command:

$file file.txt

Using the -s option we can read the block or character special file.

$file -s /dev/sda

Using -L option will follow symlinks (default if POSIXLY_CORRECT is set):

file -L stdin

We can also use ls command to determine a type of file.

Syntax:

ls [OPTION]... [FILE]...

The following table shows the types of files in Linux and what will be output using ls and file command:

File Type | Command to create the file | Located in | Type character in "ls -l" | FILE command output
Regular File | touch | Any directory/folder | - | PNG image data, ASCII text, RAR archive data, etc.
Directory File | mkdir | It is a directory | d | Directory
Block Files | fdisk | /dev | b | Block special
Character Files | mknod | /dev | c | Character special
Pipe Files | mkfifo | /dev | p | FIFO
Symbolic Link Files | ln | /dev | l | Symbolic link to <linkname>
Socket Files | socket() system call | /dev | s | Socket
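The type characters listed above can be checked first-hand. The sketch below (illustrative names, run in a scratch directory) creates a regular file, a directory, a FIFO and a symlink, then reads the first character of the "ls -l" output for each:

```shell
#!/bin/bash
# Create one file of each of several types and inspect the type character,
# which is the first column of "ls -l". Names are made up for illustration.
mkdir -p /tmp/typedemo && cd /tmp/typedemo
touch regular.txt              # regular file
mkdir subdir                   # directory
mkfifo mypipe                  # named pipe (FIFO)
ln -s regular.txt mylink       # symbolic link
ls -l  regular.txt | cut -c1   # prints: -
ls -ld subdir      | cut -c1   # prints: d
ls -l  mypipe      | cut -c1   # prints: p
ls -l  mylink      | cut -c1   # prints: l
cd /tmp && rm -r typedemo
```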

Types of Files and Explanation

a) Regular Files: Regular files are ordinary files on a system that contain programs, text, or data. They
are used to store information such as text or images. These files are located in a directory/folder. Regular
files include all readable files (text files, Docx files, programming files, etc.), binary files, image files
(JPG, PNG, SVG, etc.), and compressed files (ZIP, RAR, etc.).

Example:
ls -l
Or we can use the “file *” command to find out the file type
file *

b) Directory Files: The sole job of directory files is to store the other regular files, directory files, and
special files and their related information. A directory file contains an entry for every file and sub-
directory that it houses. If we have 10 files in a directory, we will have 10 entries in the directory file.
We can navigate between directories using the cd command.

We can find out directory file by using the following command or using the file command:

ls -l | grep ^d

c) Special Files:

c.1) Block Files: Block files act as a direct interface to block devices hence they are also called block
devices. A block device is any device that performs data Input and Output operations in units of blocks.
These files are hardware files and most of them are present in /dev.

We can find out block file by using the following command or using the file command:

ls -l | grep ^b

c.2) Character device files: A character file is a hardware file that reads/writes data character by
character. These files provide a serial stream of input or output and provide direct access to
hardware devices. The terminal, serial ports, etc. are examples of this type of file.

We can find out character device files by below command or using file command:

ls -l | grep ^c

c.3) Pipe Files: The other name of pipe is a “named” pipe, which is sometimes called a FIFO. FIFO
stands for “First In, First Out” and refers to the property that the order of bytes going in is the same
coming out. The “name” of a named pipe is actually a file name within the file system. This file sends
data from one process to another so that the receiving process reads the data in a first-in, first-out manner.

We can find out pipe file by using the following command or using the file command:

ls -l | grep ^p
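The FIFO behaviour described above can be demonstrated with two processes, a background writer and a foreground reader (the pipe name /tmp/demopipe is made up for illustration):

```shell
#!/bin/bash
# Pass data between two processes through a named pipe.
mkfifo /tmp/demopipe                       # create the FIFO
echo "hello via fifo" > /tmp/demopipe &    # writer runs in the background;
                                           # it blocks until a reader opens the pipe
cat /tmp/demopipe                          # reader prints: hello via fifo
rm /tmp/demopipe                           # the pipe is an ordinary filesystem entry
```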

c.4) Symbol link files: A symbol link file is a type of file in Linux which points to another file or a
folder on our device. Symbol link files are also called Symlink and are similar to shortcuts in Windows.

We can find out Symbol link file by using the following command or using file command:

ls -l | grep ^l

c.5) Socket Files: A socket is a special file that is used to pass information between applications and
enables the communication between two processes. We can create a socket file using the socket()
system call. A socket file is located in /dev of the root folder or you can use the find / -type s command
to find socket files.

find / -type s

We can find out socket file by using the following command or using file command:

ls -l | grep ^s

Linux System Standard Files

The standard configuration files on a Linux system control user permissions, system applications, daemons,
services, and other administrative tasks in a multi-user, multi-tasking environment. These tasks include managing
user accounts, allocating disk quotas, managing e-mails and newsgroups, and configuring kernel parameters.
There are some files within the home directory that are ordinarily hidden. Hidden files have names that begin
with a period; hence, they have been given the nickname of dot files. Hidden files are not displayed by the ls
command unless the –a option is used in the format of ls –a.

S.No File Description


1 .bash_history For users of the bash shell, a file containing up to 500 of the most recent commands
available for recall using the up and down arrow keys.
2 .bash_logout Script that is run by the bash shell when the user logs out of the system
3 .bash_profile Initialization script that is run by the bash shell upon login in order to setup variables
and aliases.
4 .bashrc Initialization script executed whenever the bash shell is started in some way other
than a login shell.
5 .login The initialization script that is run whenever a user login occurs.
6 .logout The script that is automatically run whenever a user logout occurs.
7 .profile Per-user initialization script read by login shells. System-wide default environment variables go in /etc/profile.
8 /boot/vmlinuz The Linux kernel file.
9 /dev/hda Device file for the first IDE hard disk
10 /etc/group Holds information regarding security group definitions.
11 /etc/hosts Contains host names and their corresponding IP addresses used for name resolution
whenever a DNS server is unavailable
12 /etc/inittab Describes how the INIT process should set up the system in various runlevels
13 /etc/passwd Contains information regarding registered system users. Passwords are typically
kept in a shadow file for better security.
14 /proc/mounts Displays currently mounted file systems
15 /proc/version Contains Linux version information

The vi Editor

The default editor that comes with the Unix/Linux operating system is called vi (visual editor).
[Alternate editors for Linux environments include pico, emacs, nano and Gedit ].

Note: Both Linux and vi are case-sensitive.

The vi is generally considered the de facto standard in Unix/Linux editors because:


• It's usually available on all the flavours of Unix/Linux system.
• Its implementations are very similar across the board.

• It requires very few resources.
• It is more user-friendly than earlier editors such as ed or ex.

We can use vi editor to edit an existing file or to create a new file from scratch. We can also use this
editor to just read a text file.

Starting the vi Editor:

There are the following ways to start using the vi editor:


Command Description

vi filename Creates a new file if it does not already exist; otherwise opens the existing file.
vi -R filename Opens an existing file in read-only mode.

Following is an example that creates a new file testfile if it does not already exist in the current working
directory:

$vi testfile
As a result a screen something like as follows will be shown in the monitor:
|
~
~
"testfile" [New File]
There is a tilde (~) on each line following the cursor. A tilde represents an unused line. If a line does
not begin with a tilde and appears to be blank, there is a space, tab, newline, or some other non-viewable
character present.

Operation Modes:
There are two operation modes available:
1. Command mode: This mode enables to perform administrative tasks such as saving files,
executing commands, moving the cursor, cutting (yanking) and pasting lines or words, and
finding and replacing. In this mode, whatever the user types is interpreted as a command.
2. Insert mode: This mode enables user to insert text into the file. Everything that's typed in this
mode is interpreted as input and finally it is put in the file.

vi always starts in command mode. To enter text, the user must be in insert mode. To enter insert
mode, one simply types i. To get out of insert mode, the user needs to press the Esc key, which puts the
user back into command mode.

Hint: Pressing the Esc key twice makes sure the user is in command mode.

Moving within a File (Navigation):

To move around within a file without affecting the text, you must be in command mode (press Esc twice).
Here are some of the commands that can be used to move around, one character at a time.

Commands and their Description

k : Moves the cursor up one line.


j : Moves the cursor down one line.
h : Moves the cursor to the left one character position.
l : Moves the cursor to the right one character position.
0 or | : Positions cursor at beginning of line.
$ : Positions cursor at end of line.

W : Positions cursor to the next word.
B : Positions cursor to previous word.
( : Positions cursor to beginning of current sentence.
) : Positions cursor to beginning of next sentence.
H : Move to top of screen.
nH : Moves to nth line from the top of the screen.
M : Move to middle of screen.
L : Move to bottom of screen.
nL : Moves to nth line from the bottom of the screen.
colon along with x : Colon followed by a number would position the cursor on line number represented
by x.

Control Commands (Scrolling):

The following useful commands can be used along with the Control key:

Commands and their Description:

CTRL+d : Move forward 1/2 screen.


CTRL+f : Move forward one full screen.
CTRL+u : Move backward 1/2 screen.
CTRL+b : Move backward one full screen.
CTRL+e : Moves screen up one line.
CTRL+y : Moves screen down one line.
CTRL+L : Redraws screen.

Editing and inserting in Files (Entering and Replacing Text):

To edit the file, we need to be in the insert mode. There are many ways to enter insert mode from the
command mode.

i : Inserts text before current cursor location.


I : Inserts text at beginning of current line.
a : Inserts text after current cursor location.
A : Inserts text at end of current line.
o : Creates a new line for text entry below cursor location.
O : Creates a new line for text entry above cursor location.
r : Replace single character under the cursor with the next character typed.
R : Replaces text from the cursor to right.
s : Replaces single character under the cursor with any number of characters.
S :Replaces entire line.

Deleting Characters:

Here is the list of important commands which can be used to delete characters and lines in an opened
file.

X Uppercase: Deletes the character before the cursor location.


x Lowercase : Deletes the character at the cursor location.
dw : Deletes from the current cursor location to the next word.
d^ : Deletes from current cursor position to the beginning of the line.

d$ : Deletes from current cursor position to the end of the line.
dd : Deletes the line the cursor is on.

Copy and Paste Commands:

Copy lines or words from one place and paste them on another place by using the following commands.

yy : Copies the current line.


9yy : Yanks 9 lines, starting with the current line.
p : Puts the copied text after the cursor.
P : Puts the yanked text before the cursor.

Save and Exit Commands of the ex Mode:

Need to press [Esc] key followed by the colon (:) before typing the following commands:

q : Quit
q! : Quit without saving changes i.e. discard changes.
r fileName : Read data from file called fileName.
wq : Write and quit (save and exit).
w fileName : Write to file called fileName (save as).
w! fileName : Overwrite to file called fileName (save as forcefully).
!cmd : Runs shell commands and returns to Command mode.

Searching and Replacing:

Searching for text: The / command searches through the file top to bottom, and then wraps from the
end to the beginning. The ? command searches through the file in the reverse direction going from
bottom to top, and then wraps from the top (beginning) back to the bottom of the file (end).
Note:
Repeat the previous search using the n command.
Repeat the previous search in the opposite direction using the N (Shift+n) command.

Replace the first occurrence of “OLD” found on the current line with “NEW” using
:s/OLD/NEW
Replace all occurrences of “OLD” on current line with “NEW” using /g, for example
:s/OLD/NEW/g
Replace between two lines including those lines using :#,#s/, for example
:#,#s/OLD/NEW/g
Replace every occurrence of “OLD” with “NEW” within the file using :%s, for example
:%s/OLD/NEW/g

Examples:
/abc
?abc
:s/Dagac/Dr. Ambedkar College
:s/Dagac/Dr. Ambedkar College/g
:2,3s/abc/xyz/g
:%s/[a-zA-Z]b/ /g

UNIT - II

Introduction to Shell scripting: Shell – Shell Types – Structure of bash shell script – Script file names
and permissions – Variables: Variable names, Defining and accessing variables, Variable types, Special
variables – Read and Echo commands – Basic operators: Arithmetic Operators, Relational Operators,
Boolean Operators, String Operators and File Test Operators

Shell Introduction

Linux Shell: Computers understand the language of 0s and 1s, called binary language. In the early days of
computing, instructions were provided using binary language, which is difficult for all of us to read and
write. So the OS has a special program called the shell. The shell accepts instructions or commands in
English-like form and translates them into the computer's native binary language. A shell script is a series of
commands written in a plain text file. A shell script is just like a batch file in MS-DOS, but has more power
than an MS-DOS batch file.

The command typed at the prompt is translated by the shell into instructions the machine can execute. The
shell also provides the environment for user interaction: it is a command language interpreter that executes
commands read from the standard input device (keyboard) or from a file. Linux may use one of several
popular shells, such as bash, sh, ksh, csh or tcsh.

Any of these shells reads commands from the user (via keyboard or mouse) and tells the Linux OS what the
user wants. Giving commands from the keyboard is called the command line interface (usually in front of a
$ prompt; the prompt depends on your shell and on the environment set by you or your system
administrator, so you may see a different prompt).

NOTE: To find your shell type following command

$ echo $SHELL

Structure of a Shell script:

#!/bin/bash
#
# Script Name: mytest.sh
#
# Author: Name of creator
# Date : Date of creation
#
# Description: The following script reads in a text file called /path/to/file
# and creates a new file called /path/to/newfile
#
# Run Information: This script is run automatically every Monday of every week at 20:00hrs from
# a crontab entry.
#
# Error Log: Any errors or output associated with the script can be found in /path/to/logfile
#

The first line of any script should be a line that specifies what interpreter is to be used for this script.
This line is commonly known as "hash bang" or "shebang". The first two characters of this line are #!
followed by the path of an interpreter to use. In our examples we will be using the following #!/bin/bash.

#!/bin/bash

Comments:
Comments can be made on any line of a script. Comments can even be on the same line as a command
as long as they follow the command in question. Comments are easily identified as they always begin
with the hash character #. The only exception to this rule is the very first line.

Writing shell script:

Following steps are required to write shell script:

(1) Use any editor like vi or mcedit to write shell script.


(2) After writing shell script set execute permission for the script as follows
syntax:
chmod permission your-script-name
Examples:
$ chmod +x your-script-name
$ chmod 755 your-script-name
Note: 755 sets read, write and execute (7) permission for the owner; for the group and for
others the permission is read and execute only (5).

(3) Execute your script as


syntax:
bash your-script-name
sh your-script-name
./your-script-name
Examples:
$ bash abc.sh
$ sh abc.sh
$ ./abc.sh

Syntax: abc.sh

#!/bin/bash
# Script to print information about the user who is currently logged in, and the current date & time
clear
echo "Hello $USER"
echo "Today is ";date
echo "Number of user login : " ; who | wc -l
echo "Calendar"
cal

Variables:

In Linux (Shell), there are two types of variables:

(1) System variables - Created and maintained by Linux itself. This type of variable is defined in
CAPITAL LETTERS.

(2) User defined variables (UDV) - Created and maintained by user. This type of variable is defined in
lowercase letters.

System Variables:

One can see the value of a system variable with a command like echo $VARIABLE_NAME. Some of the
important system variables are:

System Variable Meaning


BASH=/bin/bash Our shell name
BASH_VERSION=5.0.17(1) Our shell version name
COLUMNS=80 No. of columns for our screen
HOME=/home/dagac Our home directory
LINES=25 No. of lines for our screen
LOGNAME=students Our login name
OSTYPE=Linux Our Os type
PATH=/usr/bin:/sbin:/bin:/usr/sbin Our path settings
PS1=[\u@\h \W]\$ Our prompt settings
PWD=/home/students/Common Our current working directory
SHELL=/bin/bash Our shell name
USERNAME=dagac User name who is currently login to this PC

We can print any of the above variables contains as follows:


$ echo $USERNAME
$ echo $HOME

User defined variables (UDV)

User-defined variables are variables which can be created by the user and exist in the session. This
means that no one can access user-defined variables that have been set by another user, and when the
session is closed these variables expire.

To define UDV use following syntax

Syntax:
variable_name=value
'value' is assigned to the given 'variable_name'; the value must be on the right side of the = sign.

Example:
$ no=10    # this is ok
$ 10=no    # Error, NOT OK, the value must be on the right side of the = sign.
To define variable called 'transport' having value Bus
$ transport=Bus
To define variable called n having value 10
$ n=10

Note: the $ in the above example denotes the shell prompt. If you qualify a variable with the $ sign
while assigning it, as in $m=10, the shell will raise an error. But while accessing the variable, it is
necessary to qualify it with the $ sign.
Example
echo $m

Variable naming rules:


• Variable names must begin with a letter or an underscore character (_), followed by one or
more alphanumeric characters
• Don't put spaces on either side of the equal sign when assigning value to variable.
• Variables are case-sensitive
• Do not use ?,* etc, to name the variable names

To print the contents of variable 'transport', type


$ echo $transport
It will print 'Bus'.
To print the contents of variable 'n', type the command as follows
$ echo $n
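Putting the rules above together, a short sketch of defining and printing user-defined variables (the names transport and n follow the examples in the text):

```shell
#!/bin/bash
# Defining and accessing user-defined variables.
transport=Bus          # no spaces around the = sign
n=10
echo $transport        # prints: Bus
echo $n                # prints: 10
echo "n is $n"         # variables expand inside double quotes: n is 10
echo 'n is $n'         # but not inside single quotes: prints n is $n
```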

Special Variables:

Special variables are reserved for specific functions. We cannot use them as normal variables by
assigning values in the right-hand side using operator '='.

$0 - The filename of the current script.


$n - These variables correspond to the arguments with which a script was invoked. Here n is a
positive decimal number corresponding to the position of an argument (the first argument is $1, the
second argument is $2, and so on).
$# - The number of arguments supplied to a script.
$* - All the arguments are double quoted. If a script receives two arguments, $* is equivalent to $1
$2.

$@ - All the arguments are individually double quoted. If a script receives two arguments, $@ is
equivalent to $1 $2.
$? - The exit status of the last command executed.
$$ - The process number of the current shell. For shell scripts, this is the process ID under which they
are executing.
$! - The process number of the last background command.
$- Returns the flags used in the current Bash shell. $- contains the shell’s flags in use in the terminal.
These flags determine the function of the current shell.
-s : -s is the short form of stdin. This reads commands from stdin.
-m : -m is the short form of monitor. This enables job control.
-i : -i is the short form of interactive. It means the shell currently in use is interactive.
-n : -n is the short form of noexec. It means you can only read commands in a script and cannot
execute them.

Example:

#!/bin/bash
#Special variables demo

echo "File Name of the Script: $0"


echo "First Parameter : $1"
echo "Second Parameter : $2"
echo "Third Parameter : $3"
echo "Total Number of Arguments : $#"
echo "All the arguments individually quoted: $@"
echo "All the arguments are Quoted: $*"
echo "Process number of the current shell: $$"
ls
echo "Exit status of the last command executed (ls): $?"

Output:

dagac@dagac-VirtualBox:~/shell$ ./specialvar.sh one two three


File Name of the Script: ./specialvar.sh
First Parameter : one
Second Parameter : two
Third Parameter : three
Total Number of Arguments : 3
All the arguments individually quoted: one two three
All the arguments are Quoted: one two three
Process number of the current shell: 2283
specialvar.sh
Exit status of the last command executed (ls): 0

Read and Echo Commands

read command:

read is a bash built-in command that reads a line from the standard input and splits the line into words.
The first word is assigned to the first name, the second one to the second name, and so on.

The general syntax of the read built-in takes the following form:

read [options] [name...]

Below shell script read two values and assign them to var1 and var2.

#!/bin/bash
read var1 var2
echo "First variable is : $var1"
echo "Second variable is : $var2"

dagac@dagac-VirtualBox:~/shell$ ./read.sh
hello world
First variable is : hello
Second variable is : world
dagac@dagac-VirtualBox:~/shell$ ./read.sh
hello how are you
First variable is : hello
Second variable is : how are you

It is possible to use standard input to pass values to the read command.

echo "Hello, World!" | (read var3 var4; echo -e "$var3 \n$var4")

Prompt String:

When writing interactive bash scripts, a prompt string can be displayed using the -p option. The prompt
is printed before the read is executed and doesn't include a newline. If we don't pass any variable to the
read command, the input is stored in a built-in variable called REPLY (which should be prefixed with the
$ sign when displaying it).

#!/bin/bash
read -r -p "Are you sure?"
echo "You typed ${REPLY}"

Silent mode:

Silent mode is used to hide the values typed in the prompt. It is mainly used in giving input to passwords.

Example:

#!/bin/bash
read -p "username : " userid
read -sp "password : " passwd
echo
echo "username : " $userid
echo "password : " $passwd

Reading arrays:

The -a option is used to read arrays using read command.


#!/bin/bash
echo "Enter three names : "
read -a names
echo "The entered names are ${names[0]} ${names[1]} ${names[2]}"

Output:

Enter three names :


one two three
The entered names are one two three

echo command:

The echo command is a built-in Linux command that prints its arguments to the standard output.
echo is commonly used to display text strings or command results as messages.

The syntax is:

echo [option] [string]

For example, use the following command to print Hello, World! as the output:

echo Hello, World!

Options Description
-n do not print the trailing newline.
-e enable interpretation of backslash escapes.
\b backspace
\\ backslash
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\c Omits any output following the escape character

Changing the Output Format:

Using the -e option allows us to use escape characters. These special characters make it easy to
customize the output of the echo command.

For example, \c shortens the output by omitting the part of the string that follows the escape
character:

echo -e 'Hello, World! \c This is DAGAC!'


Hello, World!

Use \n any time you want to move the output to a new line:

echo -e 'Hello, \nWorld, \nthis \nis \nDAGAC!'

Hello,
World,
this
is
DAGAC!

Add horizontal tab spaces by using \t:

echo -e 'Hello, \tWorld!'


Use \v to create vertical tab spaces:

echo -e 'Hello, \vWorld, \vthis \vis \vDAGAC!'

Writing to a File:

Use > or >> to write the string from an echo command to a file, instead of displaying it as output:

echo -e 'Hello, World! \nThis is DAGAC!' >> dagac.txt

If the specified text file doesn’t already exist, this command will create it. Use the cat command to
display the content of the file:

cat dagac.txt
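The difference between the two redirection operators matters: > truncates the file before writing, while >> appends. A minimal sketch (demo.txt is a throwaway file name used only for this illustration):

```shell
#!/bin/bash
echo "first line"  >  demo.txt   # '>' creates the file (or truncates it)
echo "second line" >> demo.txt   # '>>' appends; demo.txt now has two lines
echo "only line"   >  demo.txt   # '>' again: the previous content is lost
cat demo.txt                     # prints just: only line
rm demo.txt                      # clean up
```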

Displaying Variable Values


The echo command is also used to display variable values as output. For instance, to display the name
of the current user, use:

echo $USER
echo $var1
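Here $USER is an environment variable set by the system, while var1 is an ordinary user-defined variable: it must be assigned a value before it can be displayed. A short sketch (the variable name and value are chosen only for illustration):

```shell
#!/bin/bash
var1="Welcome to DAGAC"   # a user-defined variable
echo $USER                # environment variable: prints the current user name
echo $var1                # prints: Welcome to DAGAC
echo "Hello, $var1"       # variables also expand inside double quotes
```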

Displaying Command Outputs:

The echo command allows you to include the result of other commands in the output:

echo "[string] $([command])"

Where:

[string]: The string you want to include with echo.
[command]: The command whose result you want to combine with the echo command.

For example, list all the files and directories in the Home directory by using:

echo "This is the list of directories and files on this system: $(ls)"

Basic operators

a) Arithmetic Operators:

Operators that perform arithmetic operations on operands are called arithmetic operators. Bash
supports the arithmetic operators below:

Operator  Description
+         Addition: adds the two operands on either side of the operator.
-         Subtraction: subtracts the right-hand operand from the left-hand operand.
*         Multiplication: multiplies the two operands. Note: use \* when multiplying with expr.
/         Division: divides the left-hand operand by the right-hand operand.
%         Modulus: divides the left-hand operand by the right-hand operand and returns the remainder.
=         Assignment: assigns the right-hand operand or value to the left-hand operand.
**        Exponentiation: raises operand1 to the power operand2.
+=        Increment and assign: adds a value to the operand and stores the result in the same operand.
-=        Decrement and assign: subtracts a value from the operand and stores the result in the same operand.
*=        Multiply and assign: multiplies the operand by a value and stores the result in the same operand.
/=        Divide and assign: divides the operand by a value and stores the result in the same operand.
%=        Modulo-divide and assign: modulo-divides the operand by a value and stores the result in the same operand.

In Linux, arithmetic operations can be performed in three ways.

1. Using the (()) operators: Arithmetic operations can be performed by placing the arithmetic expression
between (( and )).

Syntax:

((operand1 operator operand2))

2. The expr (expression evaluator) command can also be used to evaluate an arithmetic expression.
There must be spaces between the operators and the operands. For example, 1+2 is not correct; it
should be written as 1 + 2. The complete expression should be enclosed in backticks (` `). The
backtick key is usually located below the Escape key; note that it is not the single quote (').

Syntax:

result=`expr operand1 operator operand2`

Note: You can combine more operators and operands in an expression.

For example $num1 + $num2 - $num3.
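For instance, such a combined expression can be evaluated with expr as follows (the variable names are illustrative):

```shell
#!/bin/bash
num1=10
num2=20
num3=5
# expr requires spaces around every operator and operand.
result=`expr $num1 + $num2 - $num3`
echo "Result = $result"    # prints: Result = 25
```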

3. The let construct:
let is a built-in command of Bash that allows users to perform arithmetic operations. It follows the
basic format:

Syntax

let <arithmetic expression>

Example using let:

x=10
y=5
echo "Addition of x and y"
let "z = $(( x + y ))"
echo "z= $z"
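let also accepts expressions directly, without the $(( )) wrapper used above. A few equivalent forms, as a sketch:

```shell
#!/bin/bash
x=10
y=5
let z=x+y          # unquoted form: no spaces allowed around the operator
echo "z = $z"      # z = 15
let "w = x * y"    # quoted form: spaces allowed, and * needs no escaping
echo "w = $w"      # w = 50
let x++            # increment the variable in place
echo "x = $x"      # x = 11
```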

Example: arith1.sh

#!/bin/bash
num1=20
num2=45
sum=$((20+45))
echo "First method sum = $sum"
((sum=20+45))
echo "Second method sum = $sum"
sum=$((num1+num2))
echo "Third method sum = $sum"
((sum=num1+num2))
echo "Fourth method sum = $sum"
sum=`expr $num1 + $num2`
echo "Using expr keyword sum = $sum"

Output:

First method sum = 65


Second method sum = 65
Third method sum = 65
Fourth method sum = 65
Using expr keyword sum = 65

Example 2: arith2.sh

#!/bin/bash
a=15
b=40
echo "Examples using expr:"
val=`expr $a + $b`
echo "a + b : $val"
val=`expr $a - $b`
echo "a - b : $val"
val=`expr $a \* $b`
echo "a * b : $val"
val=`expr $a / $b`
echo "a / b : $val"
val=`expr $a % $b`
echo "a % b : $val"
echo "Examples using double parentheses:"
x=3
y=4
echo "Exponentiation of x,y"
echo $(( $x ** $y ))
echo "Incrementing x by 10, then x= "
(( x += 10 ))

echo $x
echo "Decrementing x by 3, then x= "
(( x -= 3 ))
echo $x
echo "Multiply of x by 2, then x="
(( x *= 2 ))
echo $x
echo "Dividing x by 3, x= "
(( x /= 3 ))
echo $x
echo "Remainder of Dividing x by 4, x="
(( x %= 4 ))
echo $x
echo "Increment x using ++ operator x="
(( ++x ))
echo $x
echo "Decrement x using -- operator x="
(( --x ))
echo $x

Output:

dagac@dagac-VirtualBox:~/shell$ ./arith2.sh
Examples using expr:
a + b : 55
a - b : -25
a * b : 600
a / b : 0
a % b : 15
Examples using double parentheses:
Exponentiation of x,y
81
Incrementing x by 10, then x=
13
Decrementing x by 3, then x=
10
Multiply of x by 2, then x=
20
Dividing x by 3, x=
6
Remainder of Dividing x by 4, x=
2
Increment x using ++ operator x=
3
Decrement x using -- operator x=
2

b) Relational Operators:

The relational operators compare the first operand to the second, to test the condition that the operator
specifies. They return either true or false based on the relation. These operators cannot be used with
strings. Strings have special operators for comparison.

Operator  Description                                                              Alternate
-eq       True if the values of the two operands are equal.                        ==
-ne       True if the values of the two operands are not equal.                    !=
-gt       True if the left operand is greater than the right operand.              >
-lt       True if the left operand is less than the right operand.                 <
-ge       True if the left operand is greater than or equal to the right operand.  >=
-le       True if the left operand is less than or equal to the right operand.     <=

The condition expression can be placed between square brackets with appropriate blank spaces around
operands and operators or it is possible to use double brackets. In bash shell we can use the operators
like <, > etc. shown in the above table.

Syntax:
[ $a -eq $b ]  or  (( $a == $b ))

Example: relat.sh
#!/bin/bash
read -p 'Enter a : ' a
read -p 'Enter b : ' b
echo "Relational operator equal using square brackets"
if [ $a -eq $b ]
then
echo "$a -eq $b : a is equal to b"
else
echo "$a -eq $b: a is not equal to b"
fi

echo "Relational operator equal using double brackets"


if(($a==$b))
then
echo a is equal to b.
else
echo a is not equal to b.
fi

if(( $a!=$b ))
then
echo a is not equal to b.
else
echo a is equal to b.
fi

if(( $a<$b ))
then
echo a is less than b.
else
echo a is not less than b.
fi

if(( $a<=$b ))
then
echo a is less than or equal to b.
else
echo a is not less than or equal to b.
fi

if(( $a>$b ))
then
echo a is greater than b.
else
echo a is not greater than b.
fi

if(( $a>=$b ))
then
echo a is greater than or equal to b.
else
echo a is not greater than or equal to b.
fi

Output:

dagac@dagac-VirtualBox:~/shell$ ./relat.sh
Enter a : 15
Enter b : 4
Relational operator equal using square brackets
15 -eq 4: a is not equal to b
Relational operator equal using double brackets
a is not equal to b.
a is not equal to b.
a is not less than b.
a is not less than or equal to b.
a is greater than b.
a is greater than or equal to b.

c) Boolean Operators (or) Logical Operators:

The logical operators allow a bash script to make a decision based on multiple conditions. Each operand
is a condition that evaluates to a true or false value, and the values of the conditions determine the
overall result of the expression. There are three logical operators available in bash.

Operator   Description
-a or &&   Logical AND: a binary operator that returns true if both operands are true, otherwise false.
-o or ||   Logical OR: a binary operator that returns true if either operand (or both) is true, and
           false only when both operands are false.
!          Logical NOT: a unary operator that returns true if the operand is false, and false if the
           operand is true.

Example 1 (using double brackets) :

#!/bin/bash
read -p 'Enter a : ' a
read -p 'Enter b : ' b

if (( $a >= 10 && $b <= 50 ))
then
echo "Both expressions are true."
else
echo "At least one of the expressions is false."
fi

if (( $a >= 10 || $b <= 50 ))
then
echo "At least one of the expressions is true."
else
echo "None of the expressions is true."
fi

if (( ! $a == 5 ))
then
echo "a is not equal to 5."
else
echo "a is equal to 5."
fi

Output:

dagac@dagac-VirtualBox:~/shell$ ./logic.sh
Enter a : 5
Enter b : 10
At least one of the expressions is false.
At least one of the expressions is true.
a is equal to 5.

Example 2 (using square brackets):

logic1.sh

#!/bin/bash
read -p 'Enter a : ' a
read -p 'Enter b : ' b

if [ $a -ge 10 -a $b -le 50 ]
then
echo "Both expressions are true."
else
echo "At least one of the expressions is false."
fi

if [ $a -ge 10 -o $b -le 50 ]
then
echo "At least one of the expressions is true."
else
echo "None of the expressions is true."
fi

if [ ! $a -eq 5 ]
then
echo "a is not equal to 5."
else
echo "a is equal to 5."
fi

Output:

dagac@dagac-VirtualBox:~/shell$ ./logic1.sh
Enter a : 5
Enter b : 10
At least one of the expressions is false.
At least one of the expressions is true.
a is equal to 5.

d) String Operators:

The following string operators are supported by Shell. String operators perform operation on strings.
String is a group of characters.

Operator  Description
=         True if the values of the two string operands are equal.
!=        True if the values of the two string operands are not equal.
-z        True if the given string operand has zero length.
-n        True if the given string operand has non-zero length.
str       True if str is not the empty string; false if it is empty. Example: [ $name ]

Example

#!/bin/bash

name1="Giri"
name2="Vasan"
name3=""

if [ $name1 = $name2 ]
then
echo "$name1 = $name2 : name1 is equal to name2"
else
echo "$name1 = $name2 : name1 is not equal to name2"
fi

if [ $name1 != $name2 ]
then
echo "$name1 != $name2 : name1 is not equal to name2"
else
echo "$name1 = $name2 : name1 is equal to name2"
fi

if [ -z $name1 ]
then
echo "-z $name1 : string length is zero"
else
echo "-z $name1 : string length is not zero"
fi

if [ -z $name3 ]
then
echo "-z $name3 : string length is zero"
else
echo "-z $name3 : string length is not zero"
fi

if [ -n $name1 ]
then
echo "-n $name1 : string length is not zero"
else
echo "-n $name1 : string length is zero"
fi

if [ $name1 ]
then
echo "$name1 : string is not empty"
else
echo "$name1 : string is empty"
fi

Output:

dagac@dagac-VirtualBox:~/shell$ ./str.sh
Giri = Vasan : name1 is not equal to name2
Giri != Vasan : name1 is not equal to name2
-z Giri : string length is not zero
-z : string length is zero
-n Giri : string length is not zero
Giri : string is not empty
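One caveat about the example above: the variables inside [ ] are unquoted, which works here only because the test values contain no spaces. If a value is empty or contains whitespace, an unquoted test breaks; quoting the variable is the safe habit. A sketch:

```shell
#!/bin/bash
name=""
# Unquoted, [ -z $name ] expands to [ -z ], which happens to succeed,
# but a test such as [ $name = "x" ] would fail with a syntax error.
if [ -z "$name" ]
then
    echo "name is empty"
fi

full="Giri Vasan"
# Without the quotes this would expand to [ Giri Vasan = ... ]: too many arguments.
if [ "$full" = "Giri Vasan" ]
then
    echo "full name matches"
fi
```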

e) File Test Operators

Bash has a number of operators that can be used to test various properties associated with a Linux file.

Operator  Description
-b file   True if file is a block special file. A block special file acts as a direct interface to a
          block device, i.e. any device that performs data I/O in units of blocks. Example:
          /dev/sdxn, where x denotes the device and n refers to the partition number.
-c file   True if file is a character special file. A character special file is similar to a block
          device, but data is written one character (eight bits, or one byte) at a time. Examples:
          /dev/stdin (standard input), /dev/stdout (standard output).
-d file   True if file is a directory.
-f file   True if file is an ordinary file, as opposed to a directory or special file.
-g file   True if file has its set-group-ID (SGID) bit set. When the SGID bit is set on an
          executable file, the process runs with the permissions of the members of the file's
          group, rather than the permissions of the person who launched it.
-k file   True if file has its sticky bit set. The sticky bit is a permission bit on a file or
          directory that lets only the owner of the file/directory or the root user delete or
          rename it. Set with $ chmod 1777 dir or $ chmod +t dir.
-r file   True if file is readable.
-w file   True if file is writable.
-x file   True if file is executable.
-s file   True if file has a size greater than 0.
-e file   True if file exists; true even if file is a directory.

Example:

#!/bin/bash
read -p "Enter the filename/directoryname: " fname
echo "File/Directory $fname attributes:"
if [ -c $fname ]
then
echo "It is a character special file"
else
echo "It is not a character special file"
fi
if [ -b $fname ]
then
echo "It is a block special file"
else
echo "It is not a block special file"
fi
if [ -d $fname ]
then
echo "It is a directory"
else
echo "It is not a directory"
fi
if [ -f $fname ]
then
echo "It is a file"
else
echo "It is not a file"
fi
if [ -g $fname ]
then
echo "It's SGID is set"
else
echo "It's SGID is not set"
fi
if [ -k $fname ]
then
echo "It's sticky bit is set"
else
echo "It's sticky bit is not set"
fi
if [ -r $fname ]
then
echo "It has read permission"
else
echo "It does not have read permission"
fi
if [ -w $fname ]
then
echo "It has write permission"
else
echo "It does not have write permission"
fi
if [ -x $fname ]
then
echo "It has execute permission"
else
echo "It does not have execute permission"
fi
if [ -s $fname ]
then
echo "It's size is greater than zero"
else
echo "It's size is zero"
fi
if [ -e $fname ]
then
echo "It is a file or directory and exists"
else
echo "It does not exist"
fi

Output:

dagac@dagac-VirtualBox:~/shell$ ./file.sh
Enter the filename/directoryname: arith.sh
File/Directory arith.sh attributes:
It is not a character special file
It is not a block special file
It is not a directory
It is a file
It's SGID is not set
It's sticky bit is not set
It has read permission
It has write permission
It has execute permission

It's size is greater than zero
It is a file or directory and exists
dagac@dagac-VirtualBox:~/shell$ mkdir test
dagac@dagac-VirtualBox:~/shell$ chmod 755 test
dagac@dagac-VirtualBox:~/shell$ ./file.sh
Enter the filename/directoryname: test
File/Directory test attributes:
It is not a character special file
It is not a block special file
It is a directory
It is not a file
It's SGID is not set
It's sticky bit is not set
It has read permission
It has write permission
It has execute permission
It's size is greater than zero
It is a file or directory and exists

f) Bitwise Operators:

A bitwise operator is an operator used to perform bitwise operations on bit patterns. They are of 6 types:
Operator                Description
Bitwise AND (&)         Performs binary AND operation bit by bit on the operands.
Bitwise OR (|)          Performs binary OR operation bit by bit on the operands.
Bitwise XOR (^)         Performs binary XOR operation bit by bit on the operands.
Bitwise complement (~)  Performs binary NOT operation bit by bit on the operand.
Left shift (<<)         Shifts the bits of the left operand to the left by the number of positions
                        specified by the right operand.
Right shift (>>)        Shifts the bits of the left operand to the right by the number of positions
                        specified by the right operand.

Example: bitwise.sh

#!/bin/bash

read -p 'Enter a (number): ' a
read -p 'Enter b (number): ' b

bitwiseAND=$(( a&b ))
echo Bitwise AND of a and b is $bitwiseAND

bitwiseOR=$(( a|b ))
echo Bitwise OR of a and b is $bitwiseOR

bitwiseXOR=$(( a^b ))
echo Bitwise XOR of a and b is $bitwiseXOR

bitwiseComplement=$(( ~a ))
echo Bitwise Complement of a is $bitwiseComplement

leftshift=$(( a<<1 ))
echo Left Shift of a is $leftshift

rightshift=$(( b>>1 ))
echo Right Shift of b is $rightshift

Output:

dagac@dagac-VirtualBox:~/shell$ ./bitwise.sh
Enter a (number): 4
Enter b (number): 8
Bitwise AND of a and b is 0
Bitwise OR of a and b is 12
Bitwise XOR of a and b is 12
Bitwise Complement of a is -5
Left Shift of a is 8
Right Shift of b is 4

Order of operator precedence

Operators are evaluated in order of precedence. The levels are listed in order of decreasing precedence
(quoting from the bash man page).

id++ id--          variable post-increment and post-decrement
++id --id          variable pre-increment and pre-decrement
- +                unary minus and plus
! ~                logical and bitwise negation
**                 exponentiation
* / %              multiplication, division, remainder
+ -                addition, subtraction
<< >>              left and right bitwise shifts
<= >= < >          comparison
== !=              equality and inequality
&                  bitwise AND
^                  bitwise exclusive OR
|                  bitwise OR
&&                 logical AND
||                 logical OR
expr?expr:expr     conditional operator
= *= /= %= += -= <<= >>= &= ^= |=   assignment
expr1 , expr2      comma
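A few lines illustrating the precedence rules above:

```shell
#!/bin/bash
# * binds tighter than +, so the multiplication happens first:
echo $(( 2 + 3 * 4 ))           # 14, not 20
echo $(( (2 + 3) * 4 ))         # parentheses force the addition first: 20
# Comparison binds tighter than the conditional operator:
a=7
echo $(( a > 5 ? 100 : 200 ))   # 100
# Bitwise AND binds tighter than logical OR:
echo $(( 1 & 0 || 1 ))          # evaluated as (1 & 0) || 1, so 1
```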

UNIT - III
Decision Making: if statement, if else statement, elif ladder and case statement- Looping: while loop,
for loop and until loop – break and continue statements – Meta characters -Substitution in expression
and command substitution - Input and Output redirection.

Decision Making: Decision making is about deciding the order of execution of statements based on
certain conditions. Decision making statements contain conditions that are evaluated by the program. If
the condition is true, then a set of statements are executed and if the condition is false then another set
of statements is executed.

If statement:

The if block executes the statements inside it only when the specified condition is true. If the
condition is not true, control passes to the statement after the block.

Syntax:
if [ expression ]
then
statement block
fi
Example :- Compare 2 numbers using if statement.

#!/bin/bash
echo "Enter 2 numbers for comparison"
read num1
read num2
if [ $num1 == $num2 ]
then
echo "Provided numbers are equal"
fi

If-else statement:
If specified condition is not true in if part then else part will be executed.

Syntax:

if [ expression ]
then
statement block 1
else
statement block 2
fi

Example :- Compare 2 numbers.

#!/bin/bash
echo "Please provide 2 numbers for comparison"
read num1
read num2
if [ $num1 == $num2 ]
then
echo "Provided numbers are equal"
else
echo "Provided number are not equal"
fi

The elif else statement (elif ladder):

To test multiple conditions in one if-else block, the elif keyword is used. If expression1 is true,
statement block 1 executes, and this process continues down the ladder. If none of the conditions is
true, the else part executes.

if [ expression1 ]
then
statement block 1
elif [ expression2 ]
then
statement block 2
---
---
else
statement block n
fi

Example :- Here we will use 3 conditions comparison, greater than, lesser than.

#!/bin/bash
echo "Please provide 2 numbers"
read num1
read num2
if [ $num1 -eq $num2 ]
then
echo "Provided numbers are equal"
elif [ $num1 -gt $num2 ]
then
echo "Provided number1 is greater than num2"
elif [ $num1 -lt $num2 ]
then
echo "Num1 is less than num2"
else
echo "None of the conditions matched"
fi

Case statement:

The case statement is a good alternative to a multilevel if-then-else-fi statement. It enables
matching several values against one variable.

Syntax
The syntax is as follows:

case $variable-name in
pattern1)
command1
...
....
commandN
;;
pattern2)
command1
...
....
commandN

;;
patternN)
command1
...
....
commandN
;;
*)
Default statement(s)
esac

OR

case $variable-name in
pattern1|pattern2|pattern3)
command1
...
....
commandN
;;
pattern4|pattern5|pattern6)
command1
...
....
commandN
;;
pattern7|pattern8|patternN)
command1
...
....
commandN
;;
*)
Default statement(s)
esac

The case statement allows to check pattern (conditions) and then process a command if that condition
evaluates to true. The $variable-name is compared against the patterns until a match is found. *) acts as
default and it is executed if no match is found. The pattern can include wildcards. The ;; at the end of
each commandN is mandatory. The shell executes all the statements up to the two semicolons that are
next to each other. The esac is always required to indicate end of case statement.

Example 1: case without multiple patterns in single label.

#!/bin/bash

read -p "Enter the vehicle you want to rent: " vehicle

case $vehicle in
"car") echo "For $vehicle rental is Rs. 200 per hour";;
"van") echo "For $vehicle rental is Rs. 300 per hour.";;
"jeep") echo "For $vehicle rental is Rs. 250 per hour.";;
"bicycle") echo "For $vehicle rental Rs. 20 per hour.";;
"scooter") echo "For $vehicle rental Rs. 100 per hour.";;
"bike") echo "For $vehicle rental Rs. 75 per hour.";;
*) echo "Sorry, the specified vehicle $vehicle is not available.";;
esac

Example 2: case with multiple patterns in single label.

#!/bin/bash
NOW=$(date +"%a")
echo $NOW

case $NOW in
Sun)
echo "Holiday";;
Mon|Tue|Wed|Thu|Fri)
echo "Working Day";;
Sat)
echo "Special Working Day";;
*) ;;
esac

Example 3: Using expressions:
#!/bin/bash
num1=10
num2=20

case $(( num1 + num2 )) in
10)
echo "You have entered ten";;
20)
echo "You have entered twenty";;
30)
echo "You have entered thirty";;
*) echo "Invalid input";;
esac
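Since patterns may contain wildcards, a single case clause can match a whole family of inputs. For example, a sketch of a yes/no prompt:

```shell
#!/bin/bash
read -p "Continue? (y/n) : " answer
case $answer in
    [Yy]*) echo "Proceeding";;            # matches y, Y, yes, YES, ...
    [Nn]*) echo "Stopping";;              # matches n, N, no, NO, ...
    *)     echo "Please answer y or n";;
esac
```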

Looping statements:

Loop: A loop is a sequence of instructions that is continually repeated/executed until a certain condition
is reached.

a) while loop:

The while loop is used to execute a given set of commands repeatedly as long as the given condition
evaluates to true.

The Bash while loop takes the following form:

while [CONDITION]
do
[COMMANDS]
done

The while statement starts with the while keyword, followed by the conditional expression. The
condition is evaluated before executing the commands. If the condition evaluates to true, commands are
executed. Otherwise, if the condition evaluates to false, the loop is terminated, and the program control
will be passed to the command that follows the while loop construct.

Example 1: Print numbers from 1 to 10 and print their average.

#!/bin/bash
i=1
sum=0
while [ $i -le 10 ]
do
echo $i
sum=`expr $sum + $i`
i=`expr $i + 1`
# Alternative way i=$(($i+1))
# One more alternative way ((i++))
done
avg=`expr $sum / 10`
echo "Sum of first 10 numbers is : $sum"
echo "Average of first 10 numbers is : $avg"

echo "scale=2; $sum/10" | bc

Output:

1
2
3
4
5
6
7
8
9
10
Sum of first 10 numbers is : 55
Average of first 10 numbers is : 5
5.50

Infinite while Loop:

An infinite loop is a loop that repeats indefinitely and never terminates. If the condition always evaluates
to true, you get an infinite loop. In the following example, the built-in command : is used to create an
infinite loop. : always returns true. It is possible to use the true built-in or any other statement that
always returns true.

while :
do
echo "Press <CTRL+C> to exit."
sleep 1
done

The while loop above will run indefinitely. We can terminate the loop by pressing CTRL+C.

Nesting Loops:

All loops support nesting, which means it is possible to put one loop inside another loop of the same
or a different kind. Nesting can go to any depth, based on the requirement.

Nesting while Loops:

It is possible to use a while loop as part of the body of another while loop.

Syntax
while [ condition1 ]    # this is loop1, the outer loop
do
    Statement(s) to be executed if condition1 is true

    while [ condition2 ]    # this is loop2, the inner loop
    do
        Statement(s) to be executed if condition2 is true
    done

    Statement(s) to be executed if condition1 is true
done

Example: To print multiplication table using while loop

#!/bin/bash
read -p "Enter Number of Tables: " no
read -p "Enter Range of Tables: " range
i=1
while [ $i -le $range ]
do
j=1
while [ $j -le $no ]
do
echo -n "$j X $i = $(($i * $j)) "
j=`expr $j + 1`
done
i=`expr $i + 1`
echo
done

Output:
dagac@dagac-VirtualBox:~/shell$ ./tables.sh
Enter Number of Tables: 3
Enter Range of Tables: 4
1 X 1 = 1  2 X 1 = 2  3 X 1 = 3
1 X 2 = 2  2 X 2 = 4  3 X 2 = 6
1 X 3 = 3  2 X 3 = 6  3 X 3 = 9
1 X 4 = 4  2 X 4 = 8  3 X 4 = 12

Read a File Line By Line


One of the most common usages of the while loop is to read a file, data stream, or variable line by line.

Here is an example that reads the /etc/passwd file line by line and prints each line:

file=/etc/passwd

while read -r line; do
    echo $line
done < "$file"

Instead of controlling the while loop with a condition, we are using input redirection (< "$file") to pass
a file to the read command, which controls the loop. The while loop will run until the last line is read.
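A common refinement of this loop clears IFS for the read, so that leading and trailing whitespace in each line is preserved exactly (the -r flag likewise keeps backslashes literal). A sketch using a throwaway file created just for the illustration:

```shell
#!/bin/bash
# sample.txt is a throwaway file name used only for this demonstration.
printf '   indented line\nplain line\n' > sample.txt

# IFS= keeps leading/trailing whitespace; -r keeps backslashes literal.
while IFS= read -r line
do
    echo "[$line]"        # brackets make the preserved spaces visible
done < sample.txt

rm sample.txt
```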

b) for loop

The for loop iterates over a list of items and performs the given set of commands. The for loop is used
to repeat a section of code a known number of times. Sometimes it is the computer, not you, that knows
how many times, but the count is still known. The advantage of a for loop is that we know exactly how
many times the loop will execute before it starts.

1) Simple For loop:

The simple for loop will iterate over the specified elements after the in keyword one by one. The
elements can be numbers, strings, or other forms of data.

Syntax:

for element in [LIST]
do
[COMMANDS]
done

The list can be a series of strings separated by spaces, a range of numbers, output of a command, an
array, and so on.

Example:

#!/bin/bash
for n in 1 2 3
do
echo $n
done

for m in one two three
do
echo $m
done

output:
1
2
3
one
two
three
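As noted above, the list can also be a filename glob or the output of a command. A sketch (which files the glob matches depends on the current directory):

```shell
#!/bin/bash
# Iterate over files matching a glob pattern (if nothing matches,
# the literal pattern itself is iterated once).
for f in *.sh
do
    echo "Script: $f"
done

# Iterate over the output of a command, split on whitespace:
for word in $(echo "one two three")
do
    echo "Word: $word"
done
```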

2) Range-based for loop:

It is possible to use the sequence expression to specify a range of numbers or characters by defining a
start and the end point of the range. In the range based for loop, we use the “{}” to specify a range of
numbers. Inside the curly braces, we specify the start point followed by two dots and an endpoint. By
default, it increments by one.

Syntax:

for element in {START..END}
do
[COMMANDS]
done

Example:

#!/bin/bash
for n in {1..5}
do
echo $n
done
for n in {a..e}
do
echo $n
done

Output:

1
2
3
4
5
a
b
c
d
e

Starting from Bash 4, it is also possible to specify an increment when using ranges. The expression
takes the following form:

for element in {START..END..INCREMENT}
do
[COMMANDS]
done

Example showing how to increment by 5 and decrement by 5:

#!/bin/bash
for i in {0..20..5}
do
echo "Number: $i"
done

for i in {20..0..5}
do
echo "Number: $i"
done

Output:

Number: 0
Number: 5
Number: 10
Number: 15

Number: 20
Number: 20
Number: 15
Number: 10
Number: 5
Number: 0

3) Array iteration for loops:

It is possible to use the for loop to iterate over an array of elements. In the example below, an array
named names is defined and iterating over each element of the array.

#!/bin/bash
names=('raman' 'giri' 'kanna' 'ravi')

for name in "${names[@]}"
do
echo "Name: $name"
done

Output:

Name: raman
Name: giri
Name: kanna
Name: ravi

4) C-Styled for loops:

It is possible to use the ‘C’ styled for loop in bash script.

Syntax:

for ((INITIALIZATION; TEST; STEP))
do
[COMMANDS]
done

The INITIALIZATION part is executed only once when the loop starts. Then, the TEST part is
evaluated. If it is false, the loop is terminated. If the TEST is true, commands inside the body of the for
loop are executed, and the STEP part is updated.

Example: Print odd numbers from 1 to 10

#!/bin/bash
for ((i = 1 ; i <= 10 ; i+=2))
do
echo "Value: $i"
done

Output:

Value: 1
Value: 3
Value: 5
Value: 7
Value: 9

5) Infinite for loop

Infinite loops execute indefinitely. It is necessary to exit from such a loop using a condition
together with the break statement.

Syntax:

for (( ; ; ))
do
[COMMANDS]
done

#!/bin/bash
for (( ; ; ))
do
echo "Iteration Press Ctrl + C to exit!"
sleep 2
done

c) until loop:

The until loop is used to execute a block of code repeatedly as long as the expression evaluates to
false, i.e. until the expression becomes true. This is exactly the opposite of a while loop: a while
loop runs the code block while the expression is true, and an until loop runs it while the expression
is false.

Syntax:

until [ expression ]
do
code block
...
...
done

The loop starts with the until keyword followed by an expression within brackets. The body between do
and done keeps running as long as the expression evaluates to false, and stops once it becomes true.

Example:

#!/bin/bash
i=10
until [ $i == 1 ]
do
echo "$i is not equal to 1"
i=$((i-1))
done
echo "Until loop terminated"
echo "i value after loop $i"

Output:

10 is not equal to 1
9 is not equal to 1
8 is not equal to 1
7 is not equal to 1
6 is not equal to 1
5 is not equal to 1
4 is not equal to 1
3 is not equal to 1
2 is not equal to 1
Until loop terminated
i value after loop 1

Create an Infinite Loop using until:

We can create an infinite loop by using false as the expression. When simulating infinite loops, add a
sleep so that the script pauses periodically.

#!/bin/bash
count=0
until false
do
echo "Counter = $count"
((count++))
sleep 2
done

Output:

Counter = 0
Counter = 1
Counter = 2
... (the counter keeps increasing until the loop is interrupted with CTRL+C)
Create Single Line Statements:

We can create single-line loop statements. In a single-line looping statement, a semicolon (;) must
terminate each statement.

$ until false; do echo "Counter = $count"; ((count++)); sleep 2; done

d) break and continue Statements

The break and continue statements can be used to control the loop execution.

break Statement:

The break statement terminates the current loop and passes program control to the command that
follows the terminated loop. It is usually used to terminate the loop when a certain condition is met.

In the following example, the execution of the loop will be interrupted once the current iterated item is
equal to 2.

#!/bin/bash
i=0
while [ $i -lt 5 ]
do
echo "Number: $i"
((i++))
if [[ "$i" == '2' ]]; then
break
fi
done

Output:

Number: 0
Number: 1

continue Statement:

The continue statement exits the current iteration of a loop and passes program control to the next
iteration of the loop. In the following example, once the current iterated item is equal to 2 the continue
statement will cause execution to return to the beginning of the loop and to continue with the next
iteration.

#!/bin/bash
i=0
while [ $i -lt 5 ]
do
((i++))
if [[ "$i" == '2' ]]; then
continue
fi
echo "Number: $i"
done

Output:

Number: 1
Number: 3
Number: 4
Number: 5

Note: The break and continue statements can be used in any looping construct.
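In bash, break and continue also accept an optional numeric argument naming how many enclosing loops to act on; break 2, for instance, leaves both an inner and an outer loop at once:

```shell
#!/bin/bash
for i in 1 2 3
do
    for j in 1 2 3
    do
        if [ $j -eq 2 ]
        then
            break 2          # exits the inner AND the outer loop
        fi
        echo "i=$i j=$j"
    done
done
echo "done"
```

This prints only "i=1 j=1" before "done", because the very first time j reaches 2 both loops terminate.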

Meta Characters

A metacharacter is any character that has a special meaning, such as a caret (^), dollar sign ($), or an
asterisk (*). Linux has a fair number of these metacharacters, and their meanings differ depending on
which Linux command or program we use. Metacharacters are interpreted by the shell and are not passed
to the commands. They are also known as shell wildcards. The metacharacters can be used in
commands and in expressions. Shell metacharacters can be used to group commands together, to
abbreviate filenames and pathnames, to redirect and pipe input/output, to place commands in the
background, and so forth.

Metacharacters Meaning
> Output redirection (see File Redirection)
>> Output redirection (append)
< Input redirection
* File substitution wildcard; zero or more characters
? File substitution wildcard; one character
[] File substitution wildcard; any character between brackets
`cmd` Command Substitution
$(cmd) Command Substitution
| The Pipe (|)
; Command sequence, Sequences of Commands
|| OR conditional execution
&& AND conditional execution
() Group commands, Sequences of Commands
& Run command in the background, Background Processes
# Comment
$ Expand the value of a variable
\ Prevent or escape interpretation of the next character
<< Here-document (redirects a block of inline input)
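The << metacharacter introduces a here-document: the lines between <<MARKER and a line containing only MARKER are fed to the command's standard input. For example:

```shell
#!/bin/bash
# The three lines between <<EOF and EOF become wc's standard input,
# so wc -l counts them and prints 3.
wc -l <<EOF
first line
second line
third line
EOF
```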

Command substitution:

$(<command>) is literally replaced by the output from command (the older Bourne shell syntax for
this uses backquotes: `<command>`)

Examples:

$ echo "The date and time is $(date)"


The date and time is Sat May 7 20:10:28 IST 2022

$ echo "There are $(who | wc -l) users logged onto $(hostname)"


There are 2 users logged onto dagac-VirtualBox

$ echo "You have $(ls | wc -l) files in $(pwd)"


You have 54 files in /home/dagac/shell

$ echo `ls`
50 arith1.sh arith2.sh arithlogic.sh arith.sh array.sh asso.sh a.txt bitwise.sh b.txt casestmt1.sh
casestmt.sh cmd.sh

Expression Substitution:

a) $ -> Variable substitution:

The $ metacharacter is used to access the value of a variable.

$ name=dagac
$ echo $name

Sets the variable name to dagac; displays the value stored there.

b) ! -> History substitution:

$!3

Re-executes the third event from the history list.

c) * -> Filename substitution:

$rm *
Removes all files.
$ls *.sh
Lists all files ending with .sh.

d) ? -> Filename substitution:

$ ls ??
Lists all two-character file names.
$ ls ???.sh
Lists all .sh files whose base name is three characters long.

e) [ ] -> Filename substitution:

ls c[am]*.sh
casestmt1.sh casestmt.sh cmd.sh

f) ; -> Command separator:

ls;date;pwd

Each command is executed in turn.

g) & -> Background processing:

./prg1.sh &

prg1.sh is executed in the background.

h) > ->Redirection of output:

ls > file.txt

Redirects standard output to file.txt.

i) < ->Redirection of input:

wc < file.txt

Redirects standard input from file.txt.

j) ( ) -> Groups commands to be executed in a subshell:

(ls ; pwd) >tmp.txt

Executes commands and sends output to tmp.txt file.

Sometimes we need to pass metacharacters to the command being run and do not want the shell to
interpret them. There are three options to avoid shell interpretation of metacharacters.
• Escape the metacharacter with a backslash (\). Escaping characters can be inconvenient to use
when the command line contains several metacharacters that need to be escaped.
• Use single quotes (' ') around a string. Single quotes protect every character between them; the
only character that cannot appear inside is the single quote itself.
• Use double quotes (" "). Double quotes protect all characters except the backslash (\), dollar
sign ($) and backquote (`).

$ echo The amount is \$1000 # The amount is $1000


$ echo 'The amount is $1000' # The amount is $1000
$ echo "The amount is $1000" # The amount is 000 ($1 is unset, so it expands to nothing)

Input and Output redirection

Any Linux command either takes input, produces output, or both. Linux therefore provides special
characters to redirect this input and output. For example, running the command "ls" prints its output to
the current terminal screen; if we want that output saved in a file instead, this is easily done with output
redirection. Redirection here simply means diverting the output or input. Similarly, some commands
need input to work. Take the command "head", which needs input to produce output: we can either
supply the input directly or redirect it from some other place or file. For instance, if we have a file
called "file.txt", we can use "head" to print its first few lines. Thus it is possible to redirect both the
input and the output of a command. Redirection is done with metacharacters: into a file (the angle
brackets '<' and '>') or into a program (the pipe symbol '|').

The bash shell has three standard streams in I/O redirection:

• standard input (stdin): file descriptor 0. The bash shell takes input from stdin; by default, the
keyboard is used as input.
• standard output (stdout): file descriptor 1. The bash shell sends output to stdout; by default,
output goes to the display.
• standard error (stderr): file descriptor 2. The bash shell sends error messages to stderr; by
default, they also go to the display.
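As a small sketch of these descriptor numbers in action (the file names out.txt and err.txt are illustrative, and /nonexistent is assumed not to exist):

```shell
# stdout (descriptor 1) goes to out.txt; stderr (descriptor 2) goes to err.txt.
ls /etc/passwd /nonexistent 1> out.txt 2> err.txt
cat out.txt    # holds the listing of /etc/passwd
cat err.txt    # holds the "No such file or directory" message
```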

Redirection Into a File:

Each stream uses redirection commands. Single bracket '>' or double bracket '>>' can be used to redirect
standard output. If the target file doesn't exist, a new file with the same name will be created.

Overwrite:

Commands with a single bracket '>' overwrite existing file content.

> : standard output


< : standard input
2> : standard error

Note: Writing '1>' or '>' is the same thing, as is '0<' or '<'. But for stderr you have to write '2>'.

Syntax:

cmd > fileName

Example:

ls -l > sample.txt

Append:

Commands with a double bracket '>>' do not overwrite the existing file content; they append to it.

>> - standard output (append)
<< - here document (input)
2>> - standard error (append)

Syntax:

cmd >> fileName


Example:

ls -l >> sample.txt

Redirection Into a Program:

Pipe redirects a stream from one program to another. When a pipe is used to send the standard output
of one program to another program, the first program's data will not be displayed on the terminal; only
the second program's data will be displayed. Although the functionality of a pipe may look similar to
that of '>' and '>>', there is a significant difference: a pipe redirects data from one program to another,
while the brackets are used only for redirection to and from files.

Example:

ls *.sh | cat > shellfiles.txt

Input Redirection:

We can also pass inputs to a command using input redirection operator (<).

< stdin

The bash shell uses stdin to take input. In input redirection, a file is made input to the command and
this redirection is done with the help of '<' sign.

Syntax:
cat < fileName
Example:
cat < file.txt

The above command "cat < file.txt" has taken 'file.txt' as input and displayed its content.

Redirecting the contents of file.txt file to the word count to find the number of lines.

$ wc -l < file.txt

We can also pass stdin file descriptor (0) when redirecting the input.

$ wc -l 0< file.txt

Output Redirection

Output redirection is used to put output of one command into a file or into another command.

> stdout
The stdout is redirected with the '>' (greater-than) sign. When the shell meets the '>' sign, it clears
(truncates) the target file before writing.

Example:

echo Hello everyone. > afile.txt

In the above command, greater than sign '>' redirects the command 'echo' output into a file 'afile.txt'.

Output File Is Erased: In output redirection, while scanning the command line, the shell encounters the
'>' sign and clears the file before running the command.

Example:

zcho Welcome > afile.txt

In the above example, the command "zcho Welcome > afile.txt" is wrong (zcho is not a valid command), but the file 'afile.txt' is still cleared.

Noclobber:

We can prevent file deletion while using '>' sign with the help of noclobber option.

Syntax:

set -o noclobber (To prevent overwrite)


set +o noclobber (To overwrite)

Example:

echo Learn Linux. > newfile.txt

In the above example, the command "set -o noclobber" prevents the file from being overwritten, while
"set +o noclobber" allows you to overwrite the existing file.
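Putting the two set commands together, here is a minimal demonstration (guard.txt is an illustrative file name; '>|' is bash's operator for forcing an overwrite while noclobber is on):

```shell
rm -f guard.txt
set -o noclobber                   # forbid '>' from overwriting existing files
echo "first" > guard.txt           # creating a brand-new file is still allowed
( echo "second" > guard.txt ) 2>/dev/null || echo "overwrite refused"
echo "third" >| guard.txt          # '>|' forces the write even under noclobber
set +o noclobber                   # restore normal behaviour
cat guard.txt                      # third
```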

>> append
The append sign '>>' does not overwrite the existing file content; new output is added after the old
content, so the file ends up with both.

Syntax:

command >> <fileName>

Example:

echo You all are welcome here. >> newfile.txt

In the above example, the file 'newfile.txt' is not overwritten by the append command. The new content
is displayed along with the old one.

Linux Error Redirection

2> stderr
Command '2>' redirects the error output of a command. It helps us keep the display less messy by
sending error messages elsewhere.

Example:

zcho hyii 2> /dev/null

In the above command "zcho hyii 2> /dev/null" (zcho is an invalid command), we did not get any error
message. But when we use the command "zcho hyii" alone, the error message is displayed in the terminal.
Hence, '2>' redirects the error message to the mentioned file, keeping the terminal free of error messages.

2>&1
This redirection sends stderr to wherever stdout is currently pointing, so both streams end up in the
same file.

Example:

ls *.sh nofile > abc.txt 2>&1

In the above command, both the normal listing (stdout) and the error message for the missing 'nofile'
(stderr) are written to the same file 'abc.txt'.

Here document:
A here document is used to redirect input into an interactive shell script or program. We can run an
interactive program or script within a shell script without user action by supplying the required input
through the here document.

The general form for a here document is:

Syntax:
command << delimiter
document
delimiter

$ cat << EOF
> Hello This is
> dagac
> EOF
Hello This is
dagac

Note: Here, EOF is a delimiter.
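The same mechanism supplies canned input to any command that reads standard input, which is what makes here documents useful inside non-interactive scripts. A small sketch feeding wc -l (the delimiter END is arbitrary):

```shell
#!/bin/bash
# wc -l counts the lines supplied by the here document.
count=$(wc -l << END
line one
line two
line three
END
)
echo "The here document contained $((count)) lines"
```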

Difference between pipe and redirect:

• Pipe is used to pass output to another program or utility.


• Redirect is used to pass output to either a file or stream.

Example: prg1 > prg2 vs prg1 | prg2

prg1 > prg2

Our shell will run the program named prg1. Everything that prg1 outputs will be placed in a file called
prg2. (Note: if prg2 exists, it will be overwritten.) If instead we want to pass the output from program
prg1 as input to a program called prg2, we could do the following:

prg1 > temp_file && prg2 < temp_file

However, that's clunky, so pipes were introduced as a simpler way to do the same thing.

prg1 | prg2 does the same thing as prg1 > temp_file && prg2 < temp_file.
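A concrete version of this equivalence, using ls and wc in place of the hypothetical prg1 and prg2 (temp_file is an illustrative name):

```shell
# With a temporary file: save ls output, then feed it to wc -l.
ls /etc > temp_file && wc -l < temp_file
# With a pipe: the same count, with no intermediate file.
ls /etc | wc -l
```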

UNIT - IV

Arrays - User-defined functions – Command line arguments – String processing – Process basics –
Commands related with processes – Filter commands.

Arrays

Bash Array – An array is a collection of elements. Unlike in many other programming languages, in
bash, an array is not a collection of similar elements. Since bash does not discriminate string from a
number, an array can contain a mix of strings and numbers.

There are two types of arrays available in bash:


1. Indexed arrays
2. Associative arrays

1) Indexed arrays:

Declaration of Array:

a) Explicit declaration:

The declare keyword is used to declare a variable as a Bash Array.

Syntax:

declare -a arrayname

Example:

declare -a fruits

In the above example, a fruits array is declared. The array is only created; it does not contain any elements yet.

b) Indirect declaration:

In indirect declaration, a value is assigned at a particular index of the array variable. There is no need to declare the array first.

ARRAYNAME[index]=value

Example: fruits[0]="banana"

Initialization of indexed Array:

Compound assignment:

In compound assignment, the array is assigned a bunch of values at once. It is possible to add other
values later too. Here, the assignment operator = is used and all the elements are enclosed inside
parentheses ().

Syntax:

ARRAYNAME=(value1 value2 .... valueN)


or
ARRAYNAME=([index1]=value1 [index2]=value2 ...)

Example:

fruits=("apple" "banana" "grape")
numbers=([1]=10 [2]=20 [3]=30)

In the above examples, the fruits array is initialized with three elements, and the numbers array is
initialized with three elements at indices 1, 2 and 3.

Note:
read -a fruits
Reads the entire array from the keyboard. Elements should be separated by a single space.

Access elements of Array:

We can access elements of a Bash Array using the index.

${ARRAY_NAME[index]}

Example:

In the following script, elements of array fruits are accessed with indices.

fruits=("apple" "banana" "grape")


#To print the first element we can use index 0 or simply specify the array name
echo ${fruits[0]}
echo ${fruits}
#To print the second element use index 1
echo ${fruits[1]}

Note: Array index starts with 0 in bash.
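As a side note, bash 4.3 and later also accept negative indices, which count back from the end of the array; a brief sketch:

```shell
fruits=("apple" "banana" "grape")
echo ${fruits[-1]}    # grape  (last element)
echo ${fruits[-2]}    # banana (second element from the end)
```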

We can print all the elements of a bash array using the following options.

echo ${arr[@]}
echo ${arr[*]}
echo ${arr[@]:0}
echo ${arr[*]:0}

Example:

fruits=("apple" "banana" "grape" "pineapple" "mango")


#Print all elements
echo ${fruits[@]}
#Print elements starts from index 1
echo ${fruits[@]:1}
#Print elements starts from index 1 and print two elements from there. The 2 denotes size and not the
index.
echo ${fruits[@]:1:2}

Output:

apple banana grape pineapple mango


banana grape pineapple mango
banana grape

Length of an Array:

To get the length of an array in Bash, the # is used.

${#arrayname[@]}

Example:

len=${#fruits[@]}
echo "Length of Array : $len"

Output:
Length of Array : 3

To get the length of a particular element in the array, # (hash) is used with that element.

echo ${#fruits[0]}
echo ${#fruits}

(${#fruits} without an index gives the length of the first element, fruits[0].)

Output:

5
5

Print Array with Indices and Details:

To print all the elements of a bash array with all the indices and details, the -p option of declare is used.

declare -p arrayname

Example:

fruits=("apple" "banana" "grape")


declare -p fruits

Output:

declare -a fruits='([0]="apple" [1]="banana" [2]="grape")'

Array and For Loop:

It is possible to loop through elements of array using for loop, as shown in the following example.

Example:

fruits=("apple" "banana" "grape")


for element in "${fruits[@]}";
do
echo $element
done

Output:
apple
banana
grape

Array and While Loop:
A while loop can also be used to loop through array elements. It is necessary to find the length and use
it as the condition in the while loop to iterate through the elements.

Example:

fruits=( "apple" "banana" "grape" )


i=0
len=${#fruits[@]}
while [ $i -lt $len ];
do
echo ${fruits[$i]}
((i++))
done

Loop through Indices of Array:

To loop through indices of array in Bash, the expression ${!arr[@]} is used to get the indices and the
For loop is used to iterate over these indices.

Example:

fruits=("apple" "banana" "grape")

for index in "${!fruits[@]}";


do
echo "$index -> ${fruits[$index]}"
done

Output:

0 -> apple
1 -> banana
2 -> grape
Exercise: Write a shell script to find whether the given element is available in the given array.
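One possible solution sketch for this exercise (the array contents and the search value are hard-coded here for illustration; a real script might read them with read):

```shell
#!/bin/bash
fruits=("apple" "banana" "grape")
search="banana"
found=0
# Compare each element against the search value; stop at the first match.
for element in "${fruits[@]}"
do
    if [ "$element" = "$search" ]; then
        found=1
        break
    fi
done
if [ $found -eq 1 ]
then
    echo "$search is available in the array"
else
    echo "$search is not available in the array"
fi
```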

Modification of array:

Update Element of an Array:

To update element of an array in Bash, access the element using array variable and index, and assign a
new value to this element using assignment operator.

Syntax:

arrayname[index]=new_value

Example:

fruits=("apple" "banana" "grape")


fruits[2]="mango"
echo ${fruits[@]}

append element(s) to an array:

The += operator is used to add elements to an existing array. This operator takes the array as left operand
and the element(s) as right operand. The element(s) must be enclosed in parentheses.
Syntax:

arrayname+=(element)
(or)
arrayname+=(element1 element2 elementN)
(or)
Array_name[index]="new element"

Example

fruits=("apple" "banana" "grape")


fruits+=("mango")
fruits[4]="another apple"
echo ${fruits[@]}

Append Array to Another Array:

To append an array to another array in Bash, use the following syntax.

array1+=(${array2[@]})

Example:

fruits1=("apple" "banana" "grape")


fruits2=("mango" "pineapple")
fruits1+=(${fruits2[@]})
echo ${fruits1[@]}

Slice an Array:

To slice an array in Bash, from a specific starting index taking a specific number of elements, use the
following syntax.

${arrayname[@]:start:count}

Example:

fruits=("apple" "banana" "grape" "mango" "pineapple")


sliced=${fruits[@]:1:3}
echo $sliced

Find and Replace:

echo ${fruits[@]//a/A}
echo ${fruits[@]}
echo ${fruits[0]//p/R}

Note: The above example will not permanently replace the values.
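To make the replacement permanent, assign the expanded result back to the array, as in this sketch:

```shell
fruits=("apple" "banana" "grape")
fruits=("${fruits[@]//a/A}")    # re-assign the substituted values
echo ${fruits[@]}               # Apple bAnAnA grApe
```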

To delete array element:

To delete index-1 element

unset ARRAYNAME[1]

To delete the whole Array


unset ARRAYNAME

2) Associative Arrays:

Associative arrays work based on key-value pairs. In some languages, they are also called dictionaries
or hash maps. The main difference between indexed and associative arrays is that indexed arrays work
based on a numeric index value, with each element mapped to a particular index position of the array,
while an associative array uses a "key" to map the value instead of index positions.

Initialize associative array:

Unlike an indexed array, it is not possible to initialize an associative array without using the declare
command.

declare -A iplcaptains=()

Now an empty array named "iplcaptains" is created. It is possible to add elements to the array
directly during the initialization.

declare -A iplcaptains=( [Punjab]="Mayank Agarwal" [Lucknow]="K L Rahul" [Mumbai]="Rohit Sharma" )

In the above code, the keys are within square brackets and each value follows an equals sign without
any spaces. There is no need to use a comma or semicolon as a separator between elements.

View array elements:

The echo command in bash is used to print the contents of the array. Similar to how we used the special
variables * and @ to print an indexed array, the same are used to print associative arrays too.

echo ${iplcaptains[@]}
echo ${iplcaptains[*]}

Only the values are printed and not the keys.

Individual elements can be accessed using the key.

echo ${iplcaptains[Punjab]}

Add new elements to the array:

Adding a new element to the array is simple. All we have to do is create a new key-value pair as shown
below.

iplcaptains[Delhi]="Rishabh Pant"
echo ${iplcaptains[@]}

Note: If we try to use the same key that is already present in the array, the value will be overridden with
the new one.

Compound assignment:

iplcaptains+=([Rajasthan]="Sanju Samson" [Hyderabad]="Kane Williamson")


echo ${iplcaptains[@]}

Print keys only:

In the previous examples, only the values are printed. We can get the list of keys alone by prefixing the
"!" symbol with the array.

echo ${!iplcaptains[@]}

If we wish to print both key and value at the same time, we can use the for loop.

for elem in "${!iplcaptains[@]}"


do
echo "key : ${elem}" -- "value: ${iplcaptains[${elem}]}"
done

Length of associative array:

We can get the length of the associative array i.e. total number of elements present in the array by
prefixing "#" symbol with the array.

echo ${#iplcaptains[@]}

Check if element is present in the array:

Sometimes, before doing any processing with a particular element, we may wish to check whether the
element is already present in the array. There are many ways to do this, but the simplest is to use a
conditional statement with the -n flag, which checks whether the string returned from
${iplcaptains[Mumbai]} has non-zero length. The given key is expanded and the resulting value is
checked against the -n flag.

if [[ -n "${iplcaptains[Mumbai]}" ]]
then
echo "Element is present"
else
echo "Element not present"
fi
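An alternative sketch: bash 4.3 and later provide the -v test, which checks whether the key itself is set. This matters when a key might legitimately hold an empty value:

```shell
declare -A iplcaptains=( [Mumbai]="Rohit Sharma" )
# -v tests key existence rather than the length of the value.
if [[ -v iplcaptains[Mumbai] ]]
then
    echo "Key Mumbai is present"
else
    echo "Key Mumbai is not present"
fi
```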

Remove elements:

If we wish to remove any particular element from the array, we can use the unset command with the
element key name.

unset iplcaptains[Mumbai]

Empty an array:

We can also remove all the elements from the array and make it empty by reinitializing the array as
shown below.

echo ${iplcaptains[@]}
declare -A iplcaptains=()

Remove array:

If we wish to remove the array, we can use the array name without any keys.

unset iplcaptains

User-defined functions

A Bash function is essentially a set of commands that can be called numerous times. The purpose of a
function is to help us make our bash scripts more readable and to avoid writing the same code
repeatedly. Compared to most programming languages, Bash functions are somewhat limited.

Defining Bash Functions:

Bash functions can be declared in two different formats:

The first format starts with the function name, followed by parentheses. This is the preferred and more
commonly used format.

function_name ()
{
commands
}

Single line version:

function_name () { commands; }

The second format starts with the reserved word function, followed by the function name.

function function_name
{
commands
}

Single line version:

function function_name { commands; }

Points to remember:

• The commands between the curly braces ({}) are called the body of the function. The curly
braces must be separated from the body by spaces or newlines.
• Defining a function doesn’t execute it. To invoke a bash function, simply use the function name.
Commands between the curly braces are executed whenever the function is called in the shell
script.
• The function definition must be placed before any calls to the function.
• When using single line “compacted” functions, a semicolon ; must follow the last command in
the function.
• Always try to keep the function names descriptive.

Example: Print Hello World using function.

#!/bin/bash

say_hello () {
echo 'hello, world'
}
say_hello
Output:

hello, world

Scope of variables:

Global variables: Global variables are variables that can be accessed from anywhere in the script
regardless of the scope. In Bash, all variables by default are defined as global, even if declared inside
the function.

Local variables: Local variables can be declared within the function body with the local keyword and
can be used only inside that function. You can have local variables with the same name in different
functions.

Example:

#!/bin/bash

var1=10
var2=20

test_scope()
{
local var1=30
var2=40
echo "Inside function: var1: $var1, var2: $var2"
}

echo "Before executing function: var1: $var1, var2: $var2"

test_scope

echo "After executing function: var1: $var1, var2: $var2"

Output:

Before executing function: var1: 10, var2: 20


Inside function: var1: 30, var2: 40
After executing function: var1: 10, var2: 40

The script starts by defining two global variables var1 and var2. Then there is a function that sets a local
variable var1 and modifies the global variable var2.

• When a local variable is set inside the function body with the same name as an existing global
variable, it will have precedence over the global variable.
• Global variables can be changed from within the function.

Return Values:

Unlike functions in "real" programming languages, Bash functions don't let us return an arbitrary value
when called. When a bash function completes, its return value is the exit status of the last statement
executed in the function: 0 for success and a non-zero decimal number in the 1 - 255 range for failure.

The return status can be specified by using the return keyword, and it is assigned to the variable $?. The
return statement terminates the function.

#!/bin/bash

my_function ()
{
echo "Inside function"
return 50
}

my_function

echo $?

Output:

Inside function
50

Note: To actually return an arbitrary value from a function, it is necessary to use other methods. The
simplest option is to assign the result of the function to a global variable.

#!/bin/bash


max()
{
read -p "Enter the first number: " num1
read -p "Enter the second number: " num2
if [ $num1 -gt $num2 ]
then
maxvalue=$num1;
else
maxvalue=$num2;
fi
}
max
echo "Maximum is : $maxvalue"

Output:

Enter the first number: 20


Enter the second number: 30
Maximum is : 30

Another, better option to return a value from a function is to send the value to stdout using echo or
printf, as shown below:

#!/bin/bash

max ()
{
read -p "Enter the first number: " num1
read -p "Enter the second number: " num2
if [ $num1 -gt $num2 ]
then
echo $num1;
else
echo $num2;
fi
}

func_result="$(max)"
echo "Maximum is : $func_result"

Output:

Enter the first number: 20


Enter the second number: 30
Maximum is : 30

Instead of simply executing the function which will print the message to stdout, we are assigning the
function output to the func_result variable using the $() command substitution. The variable can later
be used as needed.

Passing Arguments to Bash Functions:

To pass any number of arguments to a bash function, simply put them right after the function's name,
separated by spaces. It is a good practice to double-quote the arguments to avoid misparsing of an
argument with spaces in it.

• The passed parameters are $1, $2, $3 … $n, corresponding to the position of the parameter after
the function's name.
• The $0 variable still holds the name of the script, not the function; the function's own name is
available in the FUNCNAME variable.
• The $# variable holds the number of positional parameters/arguments passed to the function.
• The $* and $@ variables hold all positional parameters/arguments passed to the function.

Example: Find the maximum of two numbers passed as arguments to function.

#!/bin/bash

max()
{
echo "Total number of arguments: $#"
echo "First argument is: $1"
echo "Second argument is: $2"

if [ $1 -gt $2 ]
then
maxvalue=$1;
else
maxvalue=$2;
fi

echo "Maximum value is : $maxvalue"
}
max 10 20

Output:

Total number of arguments: 2


First argument is: 10
Second argument is: 20
Maximum value is : 20

Nested Functions:

One of the more interesting features of bash functions is that they can call themselves and also other
functions. A function that calls itself is known as a recursive function.

Following example demonstrates nesting of two functions:

#!/bin/sh

# Calling one function from another


function_one ()
{
echo "Inside function one"
function_two
}

function_two ()
{
echo "Inside function two"
}

# Calling function one.


function_one

Output:

Inside function one


Inside function two

Exercise 1: Find the sum of arguments passed to functions

#!/bin/bash
sum()
{
sum=0
for ((i=1;i<=$#;i++))
do
((sum+=${!i}))
# ! is used for indirect expansion. If ! is used the value is substituted and
# the whole $1 , $2 etc is used as variable name
done
echo "The sum using indirection expansion is: $sum"

sum1=0
for ((i=1;i<=$#;i++))
do
eval "arg=\${$i}"
((sum1+=arg))
done
echo "The sum using eval is: $sum1"

sum2=0
for arg in $@
do
((sum2+=arg))
done
echo "The sum is: $sum2"
}
sum 10 20 30

Exercise 2: Write a recursive function to find the factorial of the given number.

#!/bin/bash
function fact()
{
n=$1
if [[ $n -eq 0 ]]
then
echo 1
else
echo $((n * $(fact $((n - 1)))))
fi
}

read -p "Enter the number: " num


echo "$num!=$(fact $num)"

Command line arguments

Linux has a rich and powerful set of ways to supply parameters to bash scripts.

Argument List: Arguments can be passed to a bash script during the time of its execution, as a list,
separated by space following the script filename. This option is useful when a script has to perform
different functions depending on the values of the arguments input.

Below example passes three arguments (employee name, father name and age) to the emp.sh script.

$./emp.sh Raman Srinivasan 45

Using Single Quote: If the input list has arguments that comprise multiple words separated by spaces,
they need to be enclosed in single quotes.

For instance, in the above-mentioned example, if the first argument to be passed is Kizhore Raman
instead of Raman, it should be enclosed in single quotes and passed as 'Kizhore Raman'.

$./emp.sh ‘Kizhore Raman’ Srinivasan 45

Using Double Quotes: Arguments that require evaluation must be enclosed in double-quotes before
passing them as input.

Consider a bash script filecopy.sh that takes in two arguments: A file name and the directory to copy it
to:

$./filecopy.sh emp.txt "$HOME"

Here, the $HOME variable gets evaluated to the user’s home directory, and the evaluated result is
passed to the script.

Escaping Special Characters: If the arguments that need to be passed have special characters, they need
to be escaped with backslashes:

$./printstrings.sh abc a@1 cd\$ 1\*2

The characters $ and * do not belong to the safe set and hence are escaped with backslashes.

Flags: Arguments can also be passed to the bash script with the help of flags. These flags are usually
single character letters preceded by a hyphen. The corresponding input value to the flag is specified
next to it separated by space.

Below example script emp.sh which takes 3 arguments: empname, fathername, and age:

$./emp.sh -e Raman -f Srinivasan -a 45

Here, the input to the script is specified using the flags (e, f and a) and the script processes this input by
fetching the corresponding values based on the flag.

Positional Parameters: Arguments passed to a script are processed in the same order in which they’re
sent. The indexing of the arguments starts at one, and the first argument can be accessed inside the script
using $1. Similarly, the second argument can be accessed using $2, and so on. The positional parameter
refers to this representation of the arguments using their position. The $# returns the number of
arguments passed to the script and $@ returns all the parameters as an array. The main script file name
is stored in $0.

Let’s take an example of the following script, emp.sh, which prints empname, fathername and age in
that order:

#!/bin/bash

echo "Total number of arguments: $#"


echo "Name of the script: $0"
if [ $# -ne 3 ]
then
echo "Kindly pass three arguments employeename fathername and age"
else
echo "Employee Name: $1"
echo "Father Name: $2"
echo "Age: $3"
echo All arguments $@
fi

Output:

$./emp.sh
Total number of arguments: 0
Name of the script: ./emp.sh
Kindly pass three arguments employeename fathername and age

$./emp.sh Raman Srinivasan 45


Total number of arguments: 3
Name of the script: ./emp.sh
Employee Name: Raman
Father Name: Srinivasan
Age: 45
All arguments Raman Srinivasan 45

Flags:

Using flags is a common way of passing input to a script. When passing input to the script, there’s a
flag (usually a single letter) starting with a hyphen (-) before each argument.

Let’s take a look at the emp1.sh script, which takes three arguments: empname (-e), fathername (-f),
and age (-a).

The getopts function reads the flags in the input, and OPTARG refers to the corresponding values:

#!/bin/bash

while getopts e:f:a: flag


do
case $flag in
e) empname=${OPTARG};;
f) fathername=${OPTARG};;
a) age=${OPTARG};;
esac
done
echo "Employee Name: $empname";
echo "Father Name: $fathername";
echo "Age: $age";

Output:

$./emp1.sh -e Raman -f Srinivasan -a 45


Employee Name: Raman
Father Name: Srinivasan
Age: 45

Loop Construct:

Positional parameters are convenient in many cases, but they can’t be used when the input size is
unknown. The use of a loop construct comes in handy in these situations. The variable $@ is the array
of all the input parameters. Using this variable within a for loop, we can iterate over the input and
process all the arguments passed. Let’s take an example of the script users-loop.sh, which prints all the
parameters that have been passed as input:

#!/bin/bash
i=1

for param in "$@"
do
echo "Parameter - $i: $param";
i=$((i + 1));
done

Output:

$ ./users-loop.sh Raman Srinivasan 45


Parameter - 1: Raman
Parameter - 2: Srinivasan
Parameter - 3: 45

In the above example, we’re iterating the param variable over the entire array of input parameters. This
iteration starts at the first input argument, Raman, and runs until the last argument, 45, even though the
size of the input is unknown.

Shift Operator:

Shift operator in bash (syntactically shift n, where n is the number of positions to move) shifts the
position of the command line arguments. The default value for n is one if not specified. The shift
operator causes the indexing of the input to start from the shifted position. In other words, when this
operator is used on an array input, the positional parameter $1 changes to the argument reached by
shifting n positions to the right from the current argument bound to positional parameter $1.

Consider an example script that determines whether the input is odd or even:

oddeven.sh 13 18 27 35 44 52 61 79 93

$1 refers to the first argument, which is 13. Using the shift operator with input 1 (shift 1) causes the
indexing to start from the second argument. That is, $1 now refers to the second argument (18).
Similarly, calling shift 2 will then cause the indexing to start from the fourth argument (35).

#!/bin/bash
i=1;
j=$#;

while [ $i -le $j ]
do
param=$1
echo -n "$param -> "
if [ $((param%2)) -eq 0 ]
then
echo "Even"
else
echo "Odd"
fi
i=$((i + 1))
shift 1
done

Output:

$ ./oddeven.sh 13 18 27 35 44 52 61 79 93
13 -> Odd
18 -> Even
27 -> Odd
35 -> Odd
44 -> Even
52 -> Even
61 -> Odd
79 -> Odd
93 -> Odd

In this example, we’re shifting the positional parameter in each iteration by one until we reach the end
of the input. Therefore, $1 refers to the next element in the input each time.

Example: Copy the text content and create a new file with the content.

#!/bin/bash
if [ $# -ne 2 ]; then
echo "please specify 2 command line arguments"
exit 1
fi
touch $1
echo $2 > $1

$ ./copy.sh hello_world.txt "Hello World!"

Note that we wrapped our second command line argument in quotes since it contains a space. Executing
the above command generates a hello_world.txt file containing the text we specified:

$ cat hello_world.txt
Hello World!

String processing

A string is an ordered sequence of characters. String manipulation is an operation on a string changing its
contents. In bash, string manipulation comes in two forms: pure bash string manipulation, and string
manipulation via external commands.

In bash shell, when we use a dollar sign followed by a variable name, shell expands the variable with
its value. This feature of shell is called parameter expansion. But parameter expansion has numerous
other forms which allow us to expand a parameter and modify the value or substitute other values in
the expansion process.
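
One such form, shown here only as an illustration, substitutes a default value when a variable is unset:

```shell
#!/bin/bash
# Default-value expansion: ${var:-word} yields word when var is unset or empty.
unset name
echo "${name:-guest}"    # name is unset, so the default "guest" is used
name="Ramu"
echo "${name:-guest}"    # name is set, so its own value is used
```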

Variable assignment:

Strings can be assigned to a variable and later used in the script for further processing. For example, the code below creates a variable named "username" and prints its value to the console.

username="Ramu"
echo "$username"

Bash has no strong type system, so any value assigned to a variable is treated as a string. It is possible to create strings with single, double, or no quotes. There is a difference between single quotes and double quotes in bash: single quotes prevent variable and command expansion, while double quotes allow it. Take a look at the example below.

echo 'Current user name is : $USER'

echo "Current user name is : $USER"

Output:
Current user name is : $USER
Current user name is : dagac

Length of the string:

To find the length of a string, the # symbol is used before the variable name inside the braces.

echo "${#username}"
Output:
4

Converting strings to Array:

There are many ways to convert a string to an array. The simplest is to expand the unquoted string inside parentheses.

users="root dagac ramu"


userarray=($users)
echo ${userarray[@]}
for element in ${userarray[@]}
do
echo $element
done

The second method would be to split the string and store it as an array based on the delimiter used in
the string. In the previous example, space is used as the field separator (IFS – Internal Field Separator)
which is the default IFS in bash. For example, if we have a comma-separated string we can set the IFS
to a comma and create an array.

STR_TO_ARR="column1,column2,column3"
IFS=","
ARR=(${STR_TO_ARR})
for element in ${ARR[@]}
do
echo $element
done
echo "${ARR[@]}"
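
Note that assigning IFS as above changes it for the rest of the script. A variant that avoids this (a sketch using bash's read builtin with the -a option) limits the IFS override to the read command itself:

```shell
#!/bin/bash
# Split a comma-separated string into an array without permanently changing
# IFS: the assignment placed before read applies to that command only.
STR_TO_ARR="column1,column2,column3"
IFS="," read -ra ARR <<< "$STR_TO_ARR"
echo "${#ARR[@]}"    # 3 - number of elements
echo "${ARR[1]}"     # column2 - arrays are zero-indexed
```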

Case conversion:

Bash has built-in support for string case conversion. Some special characters are used at the end of the
string for case conversion like as shown below.

,, ==> Converts an entire string to lowercase


^^ ==> Converts an entire string to Uppercase
~~ ==> Transpose Case
, ==> Converts first letter alone to lowercase
^ ==> Converts first letter alone to uppercase

# ---- LOWER TO UPPER CASE ----
L_TO_U="welcome to dagac"
echo ${L_TO_U^^}

# ---- UPPER TO LOWER CASE ----


U_TO_L="WELCOME TO DAGAC"
echo ${L_TO_U,,}

# ---- TRANSPOSE CASE ----


TRS_CASE="Welcome To Dagac"
echo ${TRS_CASE~~}

#---- FIRST LETTER TO LOWERCASE ----


F_TO_L="DAGAC"
echo ${F_TO_L,}

# ---- FIRST LETTER TO UPPERCASE ----


F_TO_U="dagac"
echo ${F_TO_U^}

We can also use regex pattern matching and convert the case for the matches.

L_TO_U="welcome to dagac"
echo ${L_TO_U^^[dag]}

String concatenation:

We can concatenate multiple strings by placing the strings one after another. Depending upon how our
strings are concatenated, we can add extra characters too.

Syntax:

var=${var1}${var2}${var3}
or
var=$var1$var2$var3
or
var="$var1""$var2""$var3"
To concatenate any character between the strings:

The following will insert "**" between the strings


var=${var1}**${var2}**${var3}
or
var=$var1**$var2**$var3
or
var="$var1"**"$var2"**"$var3"

The following concatenate the strings using space:


var=${var1} ${var2} ${var3}
or
var="$var1" "$var2" "$var3"
or
echo ${var1} ${var2} ${var3}
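
Putting the syntax above to work, here is a small runnable sketch; as an aside, bash also supports the += operator for appending to a string variable:

```shell
#!/bin/bash
# Concatenation by juxtaposition, with a space inserted between the parts.
first="Hello"
second="World"
greeting="${first} ${second}!"
echo "$greeting"        # Hello World!

# += appends to an existing string variable.
greeting+=" Bye."
echo "$greeting"        # Hello World! Bye.
```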

String slicing:

String slicing is a way of extracting a part of a string using the index position. Each character in the string is assigned an integer index, which can be used to grab a portion of the string. Index values range from 0 to N-1 for a string of N characters. Below is the syntax for slicing.

${STRING:START:LENGTH}

START => starting index position
LENGTH => number of characters to take from position START

If LENGTH is not specified then the string will be printed till the end from the index position START.

greetmsg="Welcome to Dagac"
echo ${greetmsg:2}

With LENGTH given, the expansion yields LENGTH characters of the string starting from the START index position.

echo ${greetmsg:2:2}
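
A few more slices of the same string, shown as a sketch. Note that the negative-offset form needs a space after the colon so it is not mistaken for a default-value expansion:

```shell
#!/bin/bash
greetmsg="Welcome to Dagac"
echo "${greetmsg:11}"      # Dagac   - from index 11 to the end
echo "${greetmsg:0:7}"     # Welcome - 7 characters starting at index 0
echo "${greetmsg: -5}"     # Dagac   - last 5 characters (note the space)
```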

We can also reverse a string in many ways. The simplest way is to use the rev command. If we wish to do this in a pure bash way without using any external command, we have to write the logic manually.

echo ${greetmsg} | rev

Search and replace

There is a native way to search and replace characters in a string without using any external command
like sed or awk.
To replace the first occurrence of substring, use the following syntax.

${STRING/X/Y}

The first occurrence of X will be replaced by Y.

Take a look at the below example where the first occurrence of the word "linux" is replaced with LINUX
in uppercase.

MESSAGE="linux is awesome to work with. Ubuntu is one of the powerful linux distribution"
echo $MESSAGE
echo ${MESSAGE/linux/LINUX}

To replace all the occurrences of the word, use the following syntax.

echo ${MESSAGE//linux/LINUX}

Remove substring:

There are different ways to remove substring from the string. External utilities like sed, awk, or tr can
be used or there is a way to do it in bash native way.

In the bash native way, parameter expansion is used to remove the substring. We use the % symbol followed by a pattern; this removes the shortest suffix of the string that matches the pattern.

For example, to remove everything from the last period (.) onward, the following syntax should be used. The shortest suffix matching the pattern .* is .com, so .com is removed.

url="www.dagac.com"
echo ${url%.*}

To remove the longest matching suffix instead, use the double percentage symbol. Here everything from the first period onward is removed, leaving www.

url="www.dagac.com"
echo ${url%%.*}

We can also use the # or ## symbol, which performs a kind of inverse delete, removing from the front of the string. With a single # symbol, the shortest prefix matching the pattern is removed.

url="www.dagac.com"
echo ${url#*.}

With the double ## symbol, the longest prefix matching the pattern is removed, leaving only com.

url="www.dagac.com"
echo ${url##*.}
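
The four operators side by side, with their outputs as comments, may make the shortest/longest distinction easier to see:

```shell
#!/bin/bash
url="www.dagac.com"
echo "${url%.*}"     # www.dagac - shortest suffix matching .* removed
echo "${url%%.*}"    # www       - longest suffix matching .* removed
echo "${url#*.}"     # dagac.com - shortest prefix matching *. removed
echo "${url##*.}"    # com       - longest prefix matching *. removed
```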

Reverse a string:

read -p "Enter string:" string

len=${#string}
for ((i = $len - 1; i >= 0; i--))
do
reverse="$reverse${string:$i:1}"
done
echo "$reverse"

String manipulation using external commands:

To manipulate strings in bash using an external command, we need to use a feature that the bash manual
calls command substitution. In short, whatever is inside $( ) or ` ` is treated as a command and
substituted in place. The easy way to use command substitution is to assign the result of command
substitution to a variable as follows.

Commands

result=$( command )

In the case of string manipulation using an external command in bash, we would need to pipe the echo
of a string to the command, unless passing the string to the command as a parameter is accepted. Here
is what the new result should look like.

Commands

result=$( echo "${result}" | command )

Example 1: Reverse a string using rev

read -p "Enter string:" string
echo $string | rev

Now, let's try doing something real: how about reducing a string containing several words to the last word in the string? For this example, let's use the external command awk.

A few notes on the following script: we make everything lowercase and get rid of the periods. The quote, a really popular one, is by Linus Torvalds.

#!/bin/bash
quote="Talk is cheap. Show me the code.";
last_word=$( echo "${quote//./}" | awk '{print $(NF)}' );
echo "${last_word,,}"

Output

code

Process basics

When we execute a program on our Linux system, the system creates a special environment for that
program. This environment contains everything needed for the system to run the program as if no other
program were running on the system. Whenever we issue a command in Unix, it creates, or starts, a
new process. When we tried out the ls command to list the directory contents, we started a process. A
process, in simple terms, is an instance of a running program.

The operating system tracks processes through a unique ID number known as the pid, or process ID. Each process in the system has a unique pid. Pids eventually repeat, because once all the possible numbers are used up, numbering starts over. At any point in time, however, no two processes with the same pid exist in the system, because the pid is what Linux uses to track each process.
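
A shell can report its own pid through the special parameter $$, and its parent's through $PPID; this two-line sketch simply prints both:

```shell
#!/bin/bash
echo "shell PID  : $$"      # pid of the current shell process
echo "parent PID : $PPID"   # pid of the process that started this shell
```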

Starting a Process:

It is possible to start a process in two ways:

• Foreground Processes
• Background Processes

Foreground Processes:

By default, every process that we start runs in the foreground. It gets its input from the keyboard and
sends its output to the screen. We can see this happen with the ls command. If we wish to list all the
shell scripts in the current directory, we can use the following command:

$ls *.sh

This would display all the script files with the extension sh.

The process runs in the foreground, its output is directed to the screen, and if the command wants any input, it waits for it from the keyboard. While a time-consuming program is running in the foreground, no other commands can be run (no other processes started), because the prompt is not available until the program finishes and exits.

Background Processes:

A background process runs without being connected to our keyboard. If the background process requires any keyboard input, it waits. The advantage of running a process in the background is that the user can run other commands and does not have to wait until it completes to start another.

The simplest way to start a background process is to add an ampersand (&) at the end of the command.

$ls *.sh &

This displays all the files whose names end with .sh, with the listing produced by a background process. Any command can be run in the background the same way:

$./add.sh &

Here, if the command wants any input, it goes into a stopped state until we move it into the foreground and give it the data from the keyboard. The first line of output contains information about the background process: the job number and the process ID. We need to know the job number to move the job between the background and the foreground.

Press the Enter key and we will see the following −

[1] 4986

dagac@dagac-VirtualBox:~/shell$ arith1.sh casestmt1.sh for.sh if.sh specialvar.sh

arith2.sh casestmt.sh fun1.sh index.sh string.sh

arithlogic.sh cmd.sh fun2.sh logic1.sh str.sh

arith.sh deletezero.sh fun.sh logic.sh sum.sh

array.sh ex3.sh greatcmd.sh multable.sh tables.sh

asso.sh fileelif.sh greatest.sh read.sh until.sh

bitwise.sh file.sh if1.sh relat.sh while.sh

The first line tells us that the ls command background process finished successfully. The second is a prompt for another command.
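
In a script, the special parameter $! holds the pid of the most recent background process, which lets us wait for it explicitly; a minimal sketch:

```shell
#!/bin/bash
sleep 1 &                  # start a background process
bgpid=$!                   # $! is the pid of the last background job
echo "background PID: $bgpid"
wait "$bgpid"              # block until that job finishes
echo "done"
```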

Listing Running Processes:

It is easy to see our own processes by running the ps (process status) command as follows:

$ps

PID TTY TIME CMD

18358 ttyp3 00:00:00 bash

18361 ttyp3 00:01:31 test.sh

18789 ttyp3 00:00:00 ps

One of the most commonly used flags for ps is the -f ( f for full) option, which provides more
information as shown in the following example:

$ps -f

UID PID PPID C STIME TTY TIME CMD

dagac 6738 3662 0 10:23:03 pts/6 0:00 first_one

dagac 6739 3662 0 10:22:54 pts/6 0:00 second_one

dagac 3662 3657 0 08:10:53 pts/6 0:00 -bash

dagac 6892 3662 4 10:51:50 pts/6 0:00 ps -f

Here is the description of all the fields displayed by ps -f command −

UID - User ID that this process belongs to (the person running it)

PID - Process ID

PPID - Parent process ID (the ID of the process that started it)

C - CPU utilization of process

STIME - Process start time

TTY - Terminal type associated with the process

TIME - CPU time taken by the process

CMD - The command that started this process

There are other options which can be used along with ps command −

-a - Shows information about all users

-x - Shows information about processes without terminals

-u - Shows additional information like -f option

-e - Displays extended information

Stopping Processes:

Ending a process can be done in several different ways. Often, from a console-based command, sending
a CTRL + C keystroke (the default interrupt character) will exit the command. This works when the
process is running in the foreground mode. If a process is running in the background, we should get its
process ID using the ps command. After that, we can use the kill command to kill the process as follows.

$ps -f

UID PID PPID C STIME TTY TIME CMD

amrood 6738 3662 0 10:23:03 pts/6 0:00 first_one

amrood 6739 3662 0 10:22:54 pts/6 0:00 second_one

amrood 3662 3657 0 08:10:53 pts/6 0:00 -bash

amrood 6892 3662 4 10:51:50 pts/6 0:00 ps -f

$kill 6738

Terminated

Here, the kill command terminates the first_one process. If a process ignores a regular kill command,
we can use kill -9 followed by the process ID as follows.

$kill -9 6738

Terminated
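
In a script, the ps and kill steps can be combined using $!; this sketch starts a throwaway process and terminates it with the default SIGTERM:

```shell
#!/bin/bash
sleep 300 &                 # a throwaway long-running background process
pid=$!
kill "$pid"                 # send SIGTERM (signal 15), the default
wait "$pid" 2>/dev/null     # reap it; wait returns non-zero for a killed job
echo "terminated $pid"
```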

Parent and Child Processes:

Each Linux process has two ID numbers assigned to it: The Process ID (pid) and the Parent process ID
(ppid). Each user process in the system has a parent process. Most of the commands that we run have
the shell as their parent. Check the ps -f example where this command listed both the process ID and
the parent process ID.

Zombie and Orphan Processes:

Normally, when a child process is killed, the parent process is notified via a SIGCHLD signal. The parent can then do some other task or restart a new child as needed. However, sometimes the parent process is killed before its child. In this case, the "parent of all processes," the init process, becomes the parent of the child, and the child's PPID changes accordingly. Such processes are called orphan processes.

When a process has terminated, a ps listing may still show the process with a Z state. This is a zombie, or defunct, process. The process is dead and not being used: it has completed execution but still has an entry in the process table, because its parent has not yet collected its exit status. Zombies are different from orphan processes.

Daemon Processes

Daemons are system-related background processes that often run with the permissions of root and service requests from other processes. A daemon has no controlling terminal and cannot open /dev/tty. If we do a "ps -ef" and look at the TTY field, all daemons show a ? for the tty.

To be precise, a daemon is a process that runs in the background, usually waiting for something to happen that it is capable of working with, such as a printer daemon waiting for print commands. If we have a program that calls for lengthy processing, it is worth making it a daemon and running it in the background.

The top Command:

The top command is a very useful tool for quickly showing processes sorted by various criteria. It is an
interactive diagnostic tool that updates frequently and shows information about physical and virtual
memory, CPU usage, load averages, and our busy processes. Here is the simple syntax to run top
command and to see the statistics of CPU utilization by different processes.

$top

Job ID Versus Process ID:

Background and suspended processes are usually manipulated via job number (job ID). This number is
different from the process ID and is used because it is shorter. In addition, a job can consist of multiple
processes running in a series or at the same time, in parallel. Using the job ID is easier than tracking
individual processes.

Linux Commands Related to Processes:

The following table lists the most commonly used process-related commands:

Purpose - Usage - Example
To see currently running processes - ps - $ ps
To stop a process by PID, i.e., to kill the process - kill {PID} - $ kill 1012
To stop processes by name, i.e., to kill the process - killall {process-name} - $ killall httpd
To get information about all running processes - ps -ag - $ ps -ag
To stop all processes except your shell - kill 0 - $ kill 0
For background processing (with &, used to put a particular command or program in the background) - linux-command & - $ ls / -R | wc -l &
To display the owner of the processes along with the processes - ps aux - $ ps aux
To see whether a particular process is running, using ps in combination with grep - ps ax | grep process-name - e.g., to see whether the Apache web server process is running: $ ps ax | grep httpd
To see currently running processes and other information like memory and CPU usage with real-time updates (to exit from top, press q) - top - $ top
To display a tree of processes - pstree - $ pstree

Filter commands

The Linux filter commands read the standard input, perform the necessary actions or operations to convert the input into a meaningful form, and write the end result to the standard output. Filters help process information in powerful ways, for example in shell jobs, modifying live data, and reporting.

Syntax of filter command:


[filter method] [filter option] [data | path | location of data]

1. cat: Display the text of the file line by line.

Syntax:
cat [path]
Example:
cat fruits.txt
1)Banana 2)Grape 3)Orange 4)Apple 5)Pineapple
6)Papaya 7)Blueberry 8)Mango
9)Dragonfruit 10)Butterfruit 11)Plum

2. head: Shows the first n lines of the specified text file. If no number of lines is specified, the first 10 lines are printed by default.

Syntax:
head [-number_of_lines_to_print] [path]
Example:
head -2 fruits.txt
1)Banana 2)Grape 3)Orange 4)Apple 5)Pineapple
6)Papaya 7)Blueberry 8)Mango

3. tail: Works the same as head but at the other end of the file: it shows the last n lines. If no number of lines is specified, the last 10 lines are printed by default.

Syntax:
tail [-number_of_lines_to_print] [path]
Example:
tail -2 fruits.txt
6)Papaya 7)Blueberry 8)Mango
9)Dragonfruit 10)Butterfruit 11)Plum

4. sort: Sorts the lines alphabetically by default, but there are many options available to change the sorting mechanism. The default order is ascending; if we specify -r, sorting happens in reverse (descending) order.

Syntax:
sort [-options] [path]
Example:
sort -r fruits.txt
9)Dragonfruit 10)Butterfruit 11)Plum
6)Papaya 7)Blueberry 8)Mango
1)Banana 2)Grape 3)Orange 4)Apple 5)Pineapple

5. uniq: Removes duplicate lines. uniq has the limitation that it can only remove consecutive duplicate lines, so input is usually sorted first. If we want to display the number of occurrences of each line or record in the input file or data, we can use the -c option of the uniq command.

Syntax:
uniq [options] [path]
Example:
uniq -c fruits.txt

6. wc: wc command gives the number of lines, words and characters in the data.

Syntax:
wc [-options] [path]
Example:
dagac@dagac-VirtualBox:~/shell$ wc fruits.txt
3 11 112 fruits.txt
dagac@dagac-VirtualBox:~/shell$ wc -m fruits.txt
112 fruits.txt
dagac@dagac-VirtualBox:~/shell$ wc -l fruits.txt
3 fruits.txt
dagac@dagac-VirtualBox:~/shell$ wc -w fruits.txt
11 fruits.txt

7. grep: grep is used to search for a particular pattern in a text file.

Syntax:
grep [options] pattern [path]
Example:
grep Mango fruits.txt
6)Papaya 7)Blueberry 8)Mango

8. tac: tac is cat in reverse: instead of printing lines 1 through n, it prints lines n through 1.

Syntax:
tac [path]
Example:
tac fruits.txt

9. nl: nl is used to number the lines of our text data.

Syntax:
nl [-options] [path]
Example:
nl fruits.txt
     1  1)Banana 2)Grape 3)Orange 4)Apple 5)Pineapple
     2  6)Papaya 7)Blueberry 8)Mango
     3  9)Dragonfruit 10)Butterfruit 11)Plum

10. tee: The tee command copies standard input to standard output while also making a copy in one or more files.

Syntax:
tee [option] file
Example:
echo 'yes' | tee yes.txt

Option
-a append to the given file, do not overwrite
-i ignore interrupt signal

11. cut: The cut command is used to select sections of text from each line of a file. It can extract selected columns, depending on a delimiter or a count of bytes or characters.

Syntax:
cut [option] file name
Option
-b select only the bytes specified in the list from each line.
-c select only the characters specified in the list from each line.
-f select only the specified fields from each line.
-s do not print lines that do not contain delimiters.
Example:
dagac@dagac-VirtualBox:~/shell$ cut -b 1,2,3 fruits.txt
1)B
6)P
9)D
dagac@dagac-VirtualBox:~/shell$ cut -b 1-3,5-7 fruits.txt
1)Bnan
6)Ppay
9)Dago
cut -c 1,2 fruits.txt
cut -c 1-5 fruits.txt
dagac@dagac-VirtualBox:~/shell$ cut -d " " -f 1 fruits.txt
1)Banana
6)Papaya
9)Dragonfruit

12. fmt: The fmt command is a simple and optimal text formatter. It is helpful for reformatting the input data, printing the end result to the standard output.

Example:
dagac@dagac-VirtualBox:~/shell$ fmt -w 1 fruits.txt
1)Banana
2)Grape
3)Orange
4)Apple
5)Pineapple
6)Papaya
7)Blueberry
8)Mango
9)Dragonfruit
10)Butterfruit
11)Plum

13. more: The more command is useful for examining large files. It displays the file data one page at a time. The Page Up and Page Down keys do not work in more; to display the next line, we press the Enter key (the spacebar advances a full page).

Example:
cat fruits.txt | more

14. less: The less command is like more, but it is faster with large files and also allows backward movement. It displays the file data one page at a time, and the Page Up and Page Down keys work for scrolling.

Example:

cat fruits.txt | less

15. sed: For filtering and transforming text data, sed is a very powerful stream editor utility. With sed
we can do all of the following:

• Select text
• Substitute text
• Add lines to text
• Delete lines from text
• Modify (or preserve) an original file

Example:
sed -n '1,2p' fruits.txt
Note: There is a comma between 1 and 2, giving the range of lines to print. The p means "print matched lines." By default, sed prints all lines; the -n option suppresses this, so only the selected lines are printed.
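
To illustrate selection, substitution, and deletion with sed, here is a self-contained sketch (the demo file and its contents are invented for the demonstration):

```shell
#!/bin/bash
# Create a small demo file (hypothetical contents, for illustration only).
printf 'apple\nbanana\ncherry\n' > /tmp/sed_demo.txt

sed 's/banana/BANANA/' /tmp/sed_demo.txt   # substitute text on each line
sed -n '2p' /tmp/sed_demo.txt              # select (print) only line 2
sed '1d' /tmp/sed_demo.txt                 # delete line 1 from the output
```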

16. find: The find command is useful for locating files in the Linux file system.

Example:

find / -name "file.txt"

17. pr: The pr command converts the input data into a printable format with a proper column structure.

Example:
pr -T -2 fruits.txt
Note: -T omits pagination (header and footer) and -2 denotes two columns format.

dagac@dagac-VirtualBox:~/shell$ pr -T -2 fruits.txt
1)Banana 2)Grape 3)Orange 4)Apple 5 9)Dragonfruit 10)Butterfruit 11)Plu
6)Papaya 7)Blueberry 8)Mango

18. tr: The tr command will translate or delete characters from the input string or data. We can
transform the input data into upper or lower case.

Example:
echo "www.dagac.com" | tr '[:lower:]' '[:upper:]'

19. awk: awk also has the functionality to read files. Here, we use numbered field variables (parameters) to read parts of each line of the file.

The field variables and their meanings, when used in awk commands, are given below.

$0: read the complete line of input text.
$1: read the first field.
$2: read the second field.
$n: read the nth field.

Note: If we do not specify a separator with the -F option, the awk command treats whitespace in the text file or input data as the field separator and splits the result according to the $n variables given in the awk command.

Example:

cat file.txt | awk '{print $1}'

In file.txt, we have the sample data. If we use $0, the entire line is read; if we use $1, only the first column's data is filtered out.
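
A self-contained sketch (the file and its contents are invented for the demo) showing the default whitespace splitting and the -F option:

```shell
#!/bin/bash
# Demo data: two space-separated fields per line.
printf '1)Banana 2)Grape\n6)Papaya 7)Blueberry\n' > /tmp/awk_demo.txt

awk '{print $2}' /tmp/awk_demo.txt         # second field of each line
awk -F ')' '{print $1}' /tmp/awk_demo.txt  # with ) as separator, $1 is the number
```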

UNIT - V
Basic System administration: Super User Control – Scheduling tasks using cron – System run levels –
Configuration directories and files – User configuration files – Adding and Removing Users and Groups

Linux is designed to serve many users at the same time, providing an interface between the users and
the system with its resources, services, and devices. Users have their own shells through which they interact with the operating system, but the operating system itself may also need to be configured in different ways. We may need to add new users, devices like printers and scanners, and even file systems. Such
operations come under the heading of system administration. The person who performs such actions is
referred to as either a system administrator or a superuser. There are two types of interaction with
Linux: regular users’ interactions, and those of the superuser, who performs system administration
tasks.

Superuser Control: The Root User

To perform system administration operations, a user must first have access rights, such as the correct password, that enable the user to log in as the root user, making that user the superuser.
Because a superuser has the power to change almost anything on the system, such a password is usually
a carefully guarded secret, changed very frequently, and given only to those whose job it is to manage
the system. With the correct password, a user can log in to the system as a system administrator and configure the system in different ways. The administrator can start up and shut down the system, as well as change to a different operating mode, such as single-user mode. The administrator can also add or remove users, add or remove whole file systems, back up and restore files, and even designate the system's name and address.

To become a superuser, we must log in to the root user account. This is a special account reserved for
system management operations with unrestricted access to all components of our Linux operating
system. We can log in as the root user from either the GUI (graphical user interface) login screen or the
command line login prompt. We then have access to all administrative tools. Using a GUI interface like
GNOME, the root user has access to a number of distribution GUI administrative tools. If we log in
from the command line interface, we can run corresponding administrative commands like rpm or apt-
get to install packages or useradd to add a new user. From our GUI desktop, we can also run command
line administrative tools using a terminal window. The command line interface for the root user uses a special prompt, the sharp sign (#), in many distributions, while some distributions use the $.

In the next example, the user logs in to the system as the root user and receives the prompt.
login: root
password:
$

Root User Password

As the root user, we can use the passwd command to change the password for the root login, as well as for any other user on the system. The passwd command will check our password with Pluggable Authentication Modules (PAM) to see if we've selected one that can be easily cracked.
$ passwd root
New password:
Re-enter new password:
$
We must take precautions to protect our root password. Anyone who gains access as the root user will have complete control over our system. We should never store the password in a file on the system, and never choose one based on any accessible information, such as a phone number or date of birth.

A basic guideline is to make our password as complex as possible, using a phrase of several words with
numbers and upper- and lowercase letters, yet something we can still remember easily so that we never
have to write it down. We can access the passwd online manual page with the command

$ man passwd

Root User Access: su

While we are logged in to a regular user account, it may be necessary for us to log in as the root and
become a superuser. Ordinarily, we would have to log out of our user account first, and then log in to
the root. Instead, we can use the su command (switch user) to log in directly to the root while remaining
logged in to our user account.

TABLE 5.1 Basic System Administration Tools

A CTRL-D or exit command returns us to our own user login. When we are logged in as the root, we
can use su to log in as any user, without providing the password. In the next example, the user is logged
in already. The su command then logs in as the root user, making the user a superuser. Some basic
superuser commands are shown in Table 5.1.

$ pwd
/home/chris
$su
password:
# cd
# pwd
/root
# exit
$

Note: For security reasons, Linux distributions do not allow the use of su in a Telnet session to access
the root user. For SSH- or Kerberos-enabled systems, secure login access is provided using slogin
(SSH) and rlogin (Kerberos version).

Controlled Administrative Access: sudo

With the sudo tool we can allow ordinary users to have limited root user–level administrative access
for certain tasks. This allows other users to perform specific superuser operations without having full
root level control. To use sudo to run an administrative command, the user precedes the command with
the sudo command. The user is issued a time-sensitive ticket to allow access.

sudo date

The first time we issue a sudo command during a login session, we will be prompted to enter our administrative password. Access is controlled by the /etc/sudoers file. This file lists users and the commands they can run, along with the password for access. If the NOPASSWD option is set, then users will not need a password.

NOTE Some distributions like Ubuntu deny direct root access by default, and only allow administrative
root access through sudo commands.

To make changes or add entries, we have to edit the file with the special sudo editing command visudo.
This invokes the Vi editor to edit the /etc/sudoers file. Unlike a standard editor, visudo will lock the /etc/sudoers file and check the syntax of our entries. We are not allowed to save changes unless the syntax is correct. If we want to use a different editor, we can assign it to the EDITOR shell variable.

A sudoers entry has the following syntax:

user host=command

The host is a host on our network. We can specify all hosts with the ALL term. The command can be a
list of commands, some or all qualified by options such as whether a password is required. To specify
all commands, we can also use the ALL term. The following gives the user george full root-level access
to all commands on all hosts:
george ALL = ALL

In addition, we can let a user run as another user on a given host. Such alternate users are placed within parentheses before the commands. For example, if we want to give george access to the beach host as the user mydns, we use the following:
george beach = (mydns) ALL

By default sudo will deny access to all users, including the root. For this reason, the default /etc/sudoers
file sets full access for the root user to all commands. The ALL=(ALL) ALL entry allows access by
the root to all hosts as all users to all commands.
root ALL=(ALL) ALL
To specify a group name, we prefix the group with a % sign, as in %mygroup. This way, we can give
the same access to a group of users. The /etc/sudoers file contains samples for a %wheel group. To
give robert access on all hosts to the date command, we use
robert ALL=/usr/bin/date

If a user wants to see what commands he or she can run, that user uses the sudo command with the -l
option.
sudo -l

Scheduling Tasks: cron

Cron is a time-based job scheduling daemon found in Unix-like operating systems, including Linux
distributions. Cron runs in the background and operations scheduled with cron, referred to as “cron
jobs,” are executed automatically, making cron useful for automating maintenance-related tasks. It is
implemented by a cron daemon. A daemon is a continually running server that constantly checks for
certain actions to take. These tasks are listed in the crontab file. The cron daemon constantly checks
the user’s crontab file to see if it is time to take these actions. Any user can set up a crontab file of his
or her own. The root user can set up a crontab file to take system administrative actions, such as backing
up files at a certain time each week or month.

crontab Entries

A crontab entry has six fields: the first five are used to specify the time for an action, while the last
field is the action itself.
First field - specifies minutes (0–59)
Second field - specifies the hour (0–23)
Third field - specifies the day of the month (1–31)
Fourth field - specifies the month of the year (1–12, or month prefixes like Jan and Sep)
Fifth field - specifies the day of the week (0–6, or day prefixes like Wed and Fri), starting with 0 as
Sunday.

In each of the time fields, we can specify a range, specify a set of values, or use the asterisk to indicate
all values. For example, 1-5 for the day-of-week field specifies Monday through Friday. In the hour
field, 8, 12, 17 would specify 8 A.M., 12 noon, and 5 P.M. An * in the month-of-year field indicates
every month. The format of a crontab field follows:

minute hour day-of-month month day-of-week task

The following example backs up the projects directory at 2:00 A.M. every weekday:

0 2 * * 1-5 tar cf /home/backup /home/projects

The same entry is listed here again using prefixes for the month and weekday:

0 2 * * Mon-Fri tar cf /home/backup /home/projects

To specify particular months, days, weeks, or hours, we can list them individually, separated by
commas. For example, to perform the previous task on Sunday, Wednesday, and Friday, we can use
0,3,5 in the day-of-week field, or their prefix equivalents, Sun,Wed,Fri.

0 2 * * 0,3,5 tar cf /home/backup /home/projects

cron also supports comments. A comment is any line beginning with a # sign.

# Weekly backup for our projects


0 2 * * Mon-Fri tar cf /home/backup /home/projects
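Combining comments, ranges, lists, and asterisks, a small crontab might look like the following sketch (the tasks themselves are illustrative):

```
# min   hour     day-of-month  month  day-of-week  task
0       2        *             *      1-5          tar cf /home/backup /home/projects
0       8,12,17  *             *      *            /usr/bin/uptime >> /tmp/uptime.log
30      4        1             *      *            echo "monthly run"
```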

The cron.d Directory

On a heavily used system, the /etc/crontab file can become crowded easily. There may also be instances
where certain entries require different variables. For example, you may need to run some tasks under a
different shell. To help better organize your crontab tasks, you can place crontab entries in files within
the cron.d directory. The files in the cron.d directory all contain crontab entries of the same format as
/etc/crontab. They may be given any name. They are treated as added crontab files, with cron checking

them for tasks to run. For example, Linux installs a sysstat file in the cron.d that contains crontab
entries to run tools to gather system statistics.

The crontab Command

You use the crontab command to install your entries into a crontab file. To do this, first create a text
file and type your crontab entries. Save this file with any name you want, such as mycronfile. Then,
to install these entries, enter crontab and the name of the text file. The crontab command takes the
contents of the text file and creates a crontab file in the /var/spool/cron directory, adding the name of
the user who issued the command. In the following example, the root user installs the contents of
mycronfile as the root’s crontab file. This creates a file called /var/spool/cron/root. If a user named
dagac installs a crontab file, it creates a file called /var/spool/cron/dagac. You can control use of the crontab command by regular users with the /etc/cron.allow file. Only users whose names appear in this file can create crontab files of their own. Conversely, the /etc/cron.deny file lists users denied use of the cron tool, preventing them from scheduling tasks. If an /etc/cron.allow file exists, any user not listed in it is denied access. If only the /etc/cron.deny file exists, all users not listed in it are allowed access. If neither file exists, access is denied to all users.

$ crontab mycronfile

Editing in cron

Never try to edit your crontab file directly. Instead, use the crontab command with the -e option. This
opens your crontab file in the /var/spool/cron directory with the standard text editor, such as Vi
(crontab uses the default editor as specified by the EDITOR shell environment variable). To use a
different editor for crontab, change the default editor by assigning the editor’s program name to the
EDITOR variable and exporting that variable. Normally, the EDITOR variable is set in the /etc/profile
script. Running crontab with the -l option displays the contents of your crontab file, and the -r option
deletes the entire file. Invoking crontab with another text file of crontab entries overwrites your current
crontab file, replacing it with the contents of the text file.

$ crontab -e

Organizing Scheduled Tasks

You can organize administrative cron tasks into two general groups: common administrative tasks that
can be run at regular intervals, or specialized tasks that need to be run at a unique time. Unique tasks
can be run as entries in the /etc/crontab file, as described in the next section. Common administrative
tasks, though they can be run from the /etc/crontab file, are better organized into specialized cron
directories. Within such directories, each task is placed in its own shell script that will invoke the task
when run. For example, there may be several administrative tasks that all need to be run each week on
the same day, say if maintenance for a system is scheduled on a Sunday morning. For these kinds of
tasks, cron provides several specialized directories for automatic daily, weekly, monthly, and yearly
tasks. Each contains a cron prefix and a suffix for the time interval. The /etc/cron.daily directory is
used for tasks that need to be performed every day, whereas weekly tasks can be placed in the
/etc/cron.weekly directory.

The cron directories are listed in Table 5.2

TABLE 5.2 cron Command, Tools, Files and Directories

Note: The K Desktop Environment (KDE) is a desktop working platform with a graphical user interface (GUI) released in the form of an open-source package. GNOME, originally an acronym for GNU Network Object Model Environment, is a free and open-source desktop environment for Linux operating systems.

Running cron Directory Scripts

Each directory contains scripts that are all run at the same time. The actual execution of the scripts is performed by the /usr/bin/run-parts script, which runs all the scripts and programs in a given directory. Scheduling for all the tasks in a given directory is handled by an entry in the /etc/crontab file. Linux provides entries with designated times, which you may change for your own needs. The default crontab file is shown here, with times for running scripts in the different cron directories. Here you can see that most scripts are run at about 4 A.M. either daily (4:02), every Sunday (4:22), or on the first day of each month (4:42). Hourly scripts are run one minute after the hour.

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

TIP Scripts within a cron directory are run alphabetically. If you need a certain script to run before any others, you may have to alter its name. One method is to prefix the name with a numeral. For example, in the /etc/cron.weekly directory, the anacron script is named 0anacron so that it will run before any others. Keep in mind, though, that these are simply directories that contain executable files. The actual scheduling is performed by the entries in the /etc/crontab file. For example, if the day-of-week field in the cron.weekly crontab entry is changed from 0 to *, and the day-of-month field to 1 (22 4 1 * * instead of 22 4 * * 0), tasks in the cron.weekly directory will end up running monthly instead of weekly.
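The alphabetical ordering that run-parts relies on can be demonstrated with a throwaway directory; the shell expands a glob in sorted order, so a simple loop mimics run-parts (the directory and script names here are made up for the demonstration):

```shell
# Create two tiny scripts whose names sort in a known order.
dir=$(mktemp -d)
printf '#!/bin/sh\necho zlast\n'    > "$dir/zlast"
printf '#!/bin/sh\necho 0anacron\n' > "$dir/0anacron"
chmod +x "$dir"/*

# A glob expands in sorted order, so 0anacron runs before zlast,
# mirroring how run-parts executes a cron directory's scripts.
for script in "$dir"/*; do
    "$script"
done

rm -rf "$dir"
```

Running the loop prints 0anacron first and zlast second, which is why prefixing a script's name with a numeral moves it to the front of the run.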

Installing Cron:

Almost every Linux distribution has some form of cron installed by default. However, if we are using
an Ubuntu machine on which cron isn’t installed, we can install it using APT.

Before installing cron on an Ubuntu machine, update the computer’s local package index:

$ sudo apt update

Then install cron with the following command:

$ sudo apt install cron

It is necessary to make sure it’s set to run in the background too:

$ sudo systemctl enable cron

Output:

Synchronizing state of cron.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable cron

Following that, cron will be installed on your system and ready for you to start scheduling jobs.

There are four ways to check the status of the cron service in Ubuntu:

ps -ef | grep cron

systemctl status cron.service

service cron status

/etc/init.d/cron status

System Runlevels: telinit, inittab, and shutdown

A run level is a state of init and the whole system that defines what system services are operating. Run
levels are identified by numbers. Some system administrators use run levels to define which subsystems
are working, e.g., whether X is running, whether the network is operational, and so on.

• Whenever a Linux system boots, the init process is started first; it is responsible for running the other start scripts, which initialize the hardware, bring up the network, and start the graphical interface.
• init first finds the default runlevel of the system so that it can run the start scripts corresponding to that runlevel.
• A runlevel can be thought of as the state your system enters: a system in single-user mode is at runlevel 1, while a system in multi-user mode with a GUI is at runlevel 5.
• In other words, a runlevel is a preset single-digit integer that defines the operating state of your Linux or Unix-based operating system. Each runlevel designates a different system configuration and allows access to a different combination of processes.

The important thing to note here is that there are differences in the runlevels according to the operating
system. The standard LINUX kernel supports these seven different runlevels:

0 – System halt, i.e., the system can be safely powered off with no activity.
1 – Single user mode.
2 – Multiple user mode with no NFS (network file system).
3 – Multiple user mode under the command line interface and not under the graphical user interface.
4 – User-definable.
5 – Multiple user mode under GUI (graphical user interface) and this is the standard runlevel for most
of the LINUX based systems.
6 – Reboot which is used to restart the system.

By default, most Linux-based systems boot to runlevel 3 or runlevel 5. In addition to the standard runlevels, users can modify the preset runlevels or even create new ones according to their requirements. Runlevels 2 and 4 are available for user-defined runlevels, and runlevels 0 and 6 are used for halting and rebooting the system. The start scripts for each runlevel differ, performing different tasks. These start scripts can be found in special files under the rc subdirectories.

At /etc/rc.d or in /etc/ directory there will be either a set of files named rc.0, rc.1, rc.2, rc.3, rc.4, rc.5
and rc.6, or a set of directories named rc0.d, rc1.d, rc2.d, rc3.d, rc4.d, rc5.d and rc6.d.
For example, run level 1 will have its start script either in file /etc/rc.d/rc.1 or any files in the directory
/etc/rc.d/rc1.d.

Runlevels in inittab

When your system starts up, it uses the default runlevel as specified in the default init entry in the
/etc/inittab file. For example, if your default init runlevel is 5 (the graphical login), the default init
entry in the /etc/inittab file would be
id:5:initdefault:
You can change the default runlevel by editing the /etc/inittab file and changing the init default entry.
Editing the /etc/inittab file can be dangerous. You should do this with great care. As an example, if the
default runlevel is 3 (command line), the entry for your default runlevel in the /etc/inittab file should
look like the following:
id:3:initdefault:

You can change the 3 to a 5 to change your default runlevel from the command line
interface (3) to the graphical login (5). Change only this number and nothing else.
id:5:initdefault:

Note: Recent Debian distributions use the /etc/init/rc-sysinit.conf configuration file and do not use /etc/inittab.

Changing Runlevels with telinit

No matter what runlevel we start in, we can change from one runlevel to another with the telinit
command. If our default runlevel is 3, we power up in runlevel 3, but we can change to, say, runlevel 5
with telinit 5. The command telinit 0 shuts down our system. In the next example, the telinit command
changes to runlevel 1, the administrative state:

$ telinit 1

On Red Hat, Fedora, SUSE, and similar distributions, one common use of telinit to change runlevels is
when you need to install a software package that requires that the X server be shut down. This is the
case with the graphics drivers obtained directly from Nvidia or ATI. You first have to change to runlevel
3 with a telinit 3 command, shutting down the X server, and then you can install the graphics driver.
$ telinit 3
After installation, you can return to the X server and its GUI interface with the telinit 5 command.
$ telinit 5

The runlevel Command

Use the runlevel command to see what state you are currently running in. It lists the previous state
followed by the current one. If you have not changed states, the previous state will be listed as N,
indicating no previous state. This is the case for the state you boot up in. In the next example, the system
is running in state 3, with no previous state change:
$ runlevel
N 3
Alternatively, you can run this command:
$ who -r

Shutdown

Although you can power down the system with the telinit command and the 0 state, you can also use
the shutdown command. The shutdown command has a time argument that gives users on the system
a warning before you power down. You can specify an exact time to shut down or a period of minutes
from the current time. The exact time is specified by hh:mm for the hour and minutes. The period of
time is indicated by a + and the number of minutes. The shutdown command takes several options with
which you can specify how you want your system shut down. The -h option, which stands for halt,
simply shuts down the system, whereas the -r option shuts down the system and then reboots it. In the
next example, the system is shut down after ten minutes:

$ shutdown -h +10

To shut down the system immediately, you can use +0 or the word now. The following example shuts
down the system immediately and then reboots:

$ shutdown -r now
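The +m form simply means m minutes from now; with GNU date we can sketch the arithmetic behind shutdown -h +10 (the -d option is GNU-specific, so this is an illustration rather than a portable recipe):

```shell
# When would `shutdown -h +10` power the system down? Ten minutes from now.
now=$(date +%s)                      # current time in seconds since the epoch
later=$(date -d '10 minutes' +%s)    # GNU date: the same clock 10 minutes ahead
echo "shutdown would fire in $(( later - now )) seconds"
echo "that is at: $(date -d '10 minutes')"
```

The difference printed is 600 seconds, matching the +10 argument.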

With the shutdown command, you can include a warning message to be sent to all users currently
logged in, giving them time to finish what they are doing before you shut them down.

$ shutdown -h +5 "System needs a restart"

Configuration Directories and Files:

System Directories

The Linux file system is organized into directories whose files are used for different system functions (see Table 5.3). For basic system administration, we need to be familiar with the system program directories where applications are kept, the system configuration directory (/etc) where most configuration files are placed, and the system log directory (/var/log) that holds the system logs, recording activity on our system.

TABLE 5.3 System Directories

Program Directories

Directories with “bin” in the name are used to hold programs. The /bin directory holds basic user
programs, such as login shells (BASH, TCSH, and ZSH) and file commands (cp, mv, rm, ln, and so
on). The /sbin directory holds specialized system programs for such tasks as file system management
(fsck, fdisk, mkfs) and system operations like shutdown and startup (init). The /usr/bin directory holds
program files designed for user tasks. The /usr/sbin directory holds user-related system operations,
such as useradd to add new users. The /lib directory holds all the libraries your system uses, including
the main Linux library, libc, and subdirectories such as modules, which holds all the current kernel
modules.

Configuration Directories and Files

When we configure different elements of our system, such as users, applications, servers, or network connections, we use configuration files kept in certain system directories. Configuration files are placed in the /etc directory.

Configuration Files: /etc

The /etc directory holds our system, network, server, and application configuration files. Here we can
find the fstab (file system table) file listing our file systems, the hosts file with IP addresses for hosts

on our system, and /etc/profile, the system wide default BASH shell configuration file. This directory
includes various subdirectories, such as /etc/apache for the Apache web server configuration files,
/etc/X11 for the X Window System and window manager configuration files, and /etc/udev for rules to
generate device files in /dev. We can configure many applications and services by directly editing their
configuration files, though it is best to use a corresponding administration tool. Table 5.4 lists several
commonly used configuration files found in the /etc directory.

TABLE 5.4 Common System Configuration Files and Directories

System Logs: /var/log and syslogd

Various system logs for tasks performed on our system are stored in the /var/log directory. Here we
can find logs for mail, news, and all other system operations, such as web server logs. The
/var/log/messages file is a log of all system tasks not covered by other logs. This usually includes
startup tasks, such as loading drivers and mounting file systems. If a driver for a card failed to load at
startup, we will find an error message for it here. Logins are also recorded in this file, showing you who
attempted to log in to what account. The /var/log/maillog file logs mail message transmissions and
news transfers.

NOTE To view logs, you can use the GNOME System Log Viewer.

syslogd and syslog.conf

The syslogd daemon manages all the logs on our system and coordinates with any of the logging
operations of other systems on our network. Configuration information for syslogd is held in the

/etc/syslog.conf file, which contains the names and locations for our system log files. Here we find
entries for /var/log/messages and /var/log/maillog, among others. Whenever we make changes to the
syslog.conf file, we need to restart the syslogd daemon.

Managing users:

The system administrator must manage the users of the system. The administrator can add or remove users, add and remove groups, and modify access rights and permissions for both users and groups. The administrator also has access to the system initialization files, can configure all user shells, controls the default initialization files copied into a user account when it is first created, and decides how new user accounts should be configured initially by configuring these files.

Users can be managed more easily using a GUI user management tool like GNOME’s user-admin and KDE’s KUser. GNOME’s user-admin tool is part of GNOME’s system tools package.
distributions like Red Hat still use their own custom designed tools, other distributions like Ubuntu are
making use of GNOME’s user-admin tool. The KUser tool has always been available on all distributions
with the KDE desktop. GNOME’s users-admin tool provides a simple interface for adding, modifying,
and removing users and groups. It opens up with a Users Settings window listing users with their full
name, login name, and home directory. An Add User button to the side will open a New User Accounts
window with Account, User Privileges, and Advanced panels. On the Account panel we can enter the
login name and password. A profile pop-up menu lets us specify whether to make the user a normal
user or an administrator. You can also add in contact information. The User Privileges menu lets us
control what a user can do, most importantly whether to grant administrative access. The Advanced
panel lets you specify the user account settings like the home directory, the login shell to use, and what
group to belong to.

User Configuration Files

Any utility to manage a user makes use of certain default files, called configuration files, and directories
to set up the new account. A set of pathnames is used to locate these default files or to indicate where
to create certain user directories. For example, /etc/skel holds initialization files for a new user. A new
user’s home directory is created in the /home directory. Table 5.5 has a list of the pathnames.

TABLE 5.5 Paths for User Configuration Files

TIP You can find out which users are currently logged in with the w or who command. The w command
displays detailed information about each connected user, such as from where they logged in and how
long they have been inactive, and the date and time of login. The who command provides less detailed
data.

The Password Files

A user gains access to an account by providing a correct login and password. The system maintains
passwords in password files, along with login information like the username and ID. Tools like the
passwd command let users change their passwords by modifying these files; /etc/passwd is the file that
traditionally held user passwords, though in encrypted form. However, all users are allowed to read the
/etc/passwd file, which allows access by users to the encrypted passwords. For better security, password
entries are now kept in the /etc/shadow file, which is restricted to the root user.

/etc/passwd

When we add a user, an entry for that user is made in the /etc/passwd file, commonly known as the
password file. Each entry takes up one line that has several fields separated by colons. The fields are as
follows:

• Username - Login name of the user
• Password - Encrypted password for the user’s account
• User ID - Unique number assigned by the system
• Group ID - Number used to identify the group to which the user belongs
• Comment - Any user information, such as the user’s full name
• Home directory - The user’s home directory
• Login shell - Shell to run when the user logs in; this is the default shell, usually /bin/bash

Depending on whether or not we are using shadow passwords, the password field (the second field) will
be either an x or an encrypted form of the user's password. Linux implements shadow passwords by
default, so these entries should have an x for their passwords. The following is an example of an
/etc/passwd entry. For such entries, we must use the passwd command to create a password. Notice
also that user IDs in this particular system start at 500 and increment by one. The group given is not the
generic User, but a group consisting uniquely of that user. For example, the dagac user belongs to
a group named dagac, not to the generic User group.
dagac:x:500:500:dagac:/home/dagac:/bin/bash
dagac1:x:501:501:dagac1:/home/dagac1:/bin/bash
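Since each entry is just a colon-separated record, its fields can be pulled apart with standard text tools; a small sketch using awk on the sample entry above:

```shell
# Split a passwd-style record into its colon-separated fields.
# awk field numbers are 1-based: $1 username, $3 user ID, $6 home, $7 shell.
entry='dagac:x:500:500:dagac:/home/dagac:/bin/bash'
echo "$entry" | awk -F: '{ print "user: " $1 ", uid: " $3 ", home: " $6 ", shell: " $7 }'
```

This prints `user: dagac, uid: 500, home: /home/dagac, shell: /bin/bash`; the same one-liner works against the real file with `awk -F: ... /etc/passwd`.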

/etc/shadow and /etc/gshadow

The /etc/passwd file is a simple text file and is vulnerable to security breaches. Anyone who gains access to the /etc/passwd file might be able to decipher or crack the encrypted passwords through a brute-force crack. The shadow suite of applications implements a greater level of security. These include versions of useradd, groupadd, and their corresponding update and delete programs. Most other user configuration tools support shadow security measures. With shadow security, passwords are no longer kept in the /etc/passwd file. Instead, passwords are kept in a separate file called /etc/shadow. Access is restricted to the root user.

The following example shows the /etc/passwd entry for a user.

dagac1:x:501:501:dagac1:/home/dagac1:/bin/bash

A corresponding password file, called /etc/gshadow, is also maintained for groups that require
passwords.

Password Tools

To change any particular field for a given user, we should use the user management tools provided,
such as the passwd command, adduser, usermod, useradd, and chage, discussed in this chapter. The
passwd command lets us change the password only. Other tools not only make entries in the
/etc/passwd file, but also create the home directory for the user and install initialization files in the
user’s home directory. These tools also let us control users’ access to their accounts. We can set
expiration dates for users or lock them out of their accounts. Users locked out of their accounts will have their password in the /etc/shadow file prefixed by the invalid string !!. Unlocking the account removes this prefix.
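The locking mechanism can be sketched as plain text manipulation: prefix the password field (the second field) with !! to lock, and strip that prefix to unlock. The hash below is a made-up placeholder, and real accounts should only ever be locked with tools such as usermod -L or passwd -l:

```shell
# A shadow-style record with a placeholder hash (not a real entry).
entry='dagac1:$6$examplehash:19000:0:99999:7:::'

# Lock: insert !! at the start of the password field.
locked=$(echo "$entry" | sed 's/^\([^:]*\):/\1:!!/')
echo "$locked"      # the password field now begins with !!

# Unlock: strip the !! prefix again.
unlocked=$(echo "$locked" | sed 's/^\([^:]*\):!!/\1:/')
echo "$unlocked"
```

Because the stored hash no longer matches any password the user could type, the !! prefix effectively disables the login without destroying the original hash.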

Managing User Environments

Each time a user logs in, two profile scripts are executed, a system profile script that is the same for
every user, and a user login profile script that can be customized to each user’s needs. When the user
logs out, a user logout script is run. In addition, each time a shell is generated, including the login shell,
a user shell script is run. There are different kinds of scripts used for different shells. The default shell
commonly used is the BASH shell. As an alternative, users can use different shells such as TCSH or
the Z shell.

Profile Scripts

For the BASH shell, each user has his or her own BASH login profile script named .bash_profile in
the user’s home directory. The system profile script is located in the /etc directory and named profile
with no preceding period. The BASH shell user shell script is called .bashrc. The .bashrc file also runs
the /etc/bashrc file to implement any global definitions such as the PS1 and TERM variables. The
/etc/bashrc file also executes any specialized initialization file in the /etc/profile.d directory, such as
those used for KDE and GNOME. The .bash_profile file runs the .bashrc file, and through it, the
/etc/bashrc file, implementing global definitions.
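The chain of scripts can be rehearsed in miniature: a .bash_profile that sources .bashrc, exercised in a throwaway home directory (the file contents are minimal stand-ins for real dotfiles):

```shell
# Build a fake home directory with a two-link profile chain.
home=$(mktemp -d)
echo 'export GREETING="hello from bashrc"' > "$home/.bashrc"
cat > "$home/.bash_profile" <<'EOF'
# Source the user shell script if it exists, as a real .bash_profile does.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
EOF

# Simulate a login: source .bash_profile and show the inherited definition.
HOME="$home" bash -c '. "$HOME/.bash_profile"; echo "$GREETING"'

rm -rf "$home"
```

The final command prints `hello from bashrc`, showing how a definition placed in .bashrc reaches the login shell through .bash_profile.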

As a superuser, we can edit any of these profile or shell scripts and put in any commands we want
executed for each user when that user logs in. For example, we may want to define a default path for
commands, in case the user has not done so. Or you may want to notify the user of recent system news
or account changes.

Adding and Removing Users with useradd, usermod, and userdel

Linux also provides the useradd, usermod, and userdel commands to manage user accounts. All these
commands take in their information as options on the command line. If an option is not specified, they
use predetermined default values. These are command line operations. To use them on our desktop we
first need to open a terminal window (right-click the desktop and select Open Terminal), and then enter
the commands at the shell prompt. If we are using a desktop interface, we should use GUI tools to
manage user accounts.

Each Linux distribution usually provides a tool to manage users. In addition we can use the K Desktop
KUser tool or the GNOME System Tools User Settings. See Table 5.6 for a listing of user management
tools.

TABLE 5.6 User and Group Management Tools

Useradd

With the useradd command, we enter values as options on the command line, such as the name of a
user, to create a user account. It then creates a new login and directory for that name using all the default
features for a new account.

$ useradd chris

The useradd utility first checks the /etc/login.defs file for default values for creating a new account.
For those defaults not defined in the /etc/login.defs file, useradd supplies its own. We can display these
defaults using the useradd command with the -D option. The default values include the group name,
the user ID, the home directory, the skel directory, and the login shell. Values the user enters on the
command line will override corresponding defaults. The group name is the name of the group in which
the new account is placed. By default, this is other, which means the new account belongs to no group.
The user ID is a number identifying the user account. The skel directory is the system directory that
holds copies of initialization files. These initialization files are copied into the user’s new home
directory when it is created. The login shell is the pathname for the particular shell the user plans to use.

The useradd command has options that correspond to each default value. Table 5.7 holds a list of all the options we can use with the useradd command. We can use specific values in place of any of these defaults when creating a particular account. In the next example, the group name for the dagac1 account is set to dagac and the user ID is set to 578:

Table 5.7. Options for useradd and usermod

$ useradd dagac1 -g dagac -u 578

Once we add a new user login, we need to give the new login a password; the login is inaccessible until we do. Password entries are placed in the /etc/passwd and /etc/shadow files. Use the passwd command to create a new password for the user, as shown here. The password we enter will not appear on the screen. We will be prompted to repeat the password. A message will then be issued indicating that the password was successfully changed.

$ passwd dagac1
Changing password for user dagac1
New password:
Retype new password:
passwd: all authentication tokens updated successfully
$

usermod

The usermod command enables us to change the values for any of these features. We can change the home directory or the user ID. We can even change the username for the account. The usermod command takes the same options as useradd, listed previously in Table 5.7.

Userdel

When we want to remove a user from the system, we can use the userdel command to delete the user’s
login. With the -r option, the user’s home directory will also be removed. In the next example, the user
dagac1 is removed from the system:

$ userdel -r dagac1

Managing Groups

We can manage groups using either shell commands or GUI utilities. Groups are an effective way to
manage access and permissions, letting us control several users with just their group name.

/etc/group and /etc/gshadow

The system file that holds group entries is called /etc/group. The file consists of group records, with
one record per line and its fields separated by colons. A group record has four fields: a group name, a password, its ID, and the users who are part of this group. The password field can be left blank. The fields for a group record are as follows:
• Group name - The name of the group, which must be unique
• Password - With shadow security implemented, this field is an x, with the password indicated in the /etc/gshadow file
• Group ID - The number assigned by the system to identify this group
• Users - The list of users that belong to the group, separated by commas

Here is an example of an entry in an /etc/group file. The group is called network, the password is
managed by shadow security, the group ID is 100, and the users who are part of this group are dagac,
root, admin, and dagac1:
network:x:100:dagac,root,admin,dagac1
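The comma-separated user list in the fourth field is easy to extract with cut, as this sketch on the sample record shows:

```shell
# Pull the member list out of a group record and list one user per line.
record='network:x:100:dagac,root,admin,dagac1'
members=$(echo "$record" | cut -d: -f4)   # fourth colon-separated field
echo "$members" | tr ',' '\n'             # split the comma list into lines
```

Against the live file, `grep '^network:' /etc/group | cut -d: -f4` would do the same for the network group.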

As in the case of the /etc/passwd file, it is best to change group entries using a group management
utility like groupmod or groupadd. All users have read access to the /etc/group file. With shadow
security, secure group data such as passwords are kept in the /etc/gshadow file, to which only the root
user has access.

User Private Groups

A new user can be assigned to a special group set up for just that user and given the user’s name. Thus, the new user dagac1 is given a default group also called dagac1. The group dagac1 will also show up in the listing of groups. This method of assigning default user groups is called the User Private Group (UPG) scheme. The supplementary groups are additional groups that the user may want to belong to.
Traditionally, users were all assigned to one group named users that subjected all users to the group
permission controls for the users group. With UPG, each user has its own group, with its own group
permissions.

Group Directories

As with users, we can create a home directory for a group. To do so, we simply create a directory for the group in the /home directory, change its group ownership to that group, and allow access by any member of the group. The following example creates a directory called engines and changes its group to the engines group:

mkdir /home/engines
chgrp engines /home/engines

Then the read, write, and execute permissions for the group level should be set with the chmod
command:

chmod g+rwx /home/engines
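The same setup can be rehearsed safely in a temporary directory; chgrp is omitted here because the engines group will not exist on an arbitrary machine, and GNU stat is assumed for checking the resulting mode:

```shell
# Stand-in for /home/engines; mktemp -d creates it with mode 700.
share=$(mktemp -d)
chmod g+rwx "$share"        # grant the group read, write, and execute
stat -c '%a' "$share"       # GNU stat: prints 770 (group digit is now 7)
rm -rf "$share"
```

The middle octal digit moving from 0 to 7 is exactly the change `chmod g+rwx` makes on the real shared directory.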

Any member of the engines group can now access the /home/engines directory and any shared files
placed therein. This directory becomes a shared directory for the group. We can, in fact, use the same
procedure to make other shared directories at any location on the file system. Files within the shared
directory should also have their permissions set to allow access by other users in the group. When a
user places a file in a shared directory, the user needs to set the permissions on that file to allow other
members of the group to access it. A read permission will let others display it, write lets them change
it, and execute lets them run it (used for scripts and programs). The following example first changes the
group for the mymodel file to engines. Then it copies the mymodel file to the /home/engines directory

and sets the group read and write permission for the engines group:

$ chgrp engines mymodel


$ cp mymodel /home/engines
$ chmod g+rw /home/engines/mymodel

Managing Groups Using groupadd, groupmod, and groupdel

We can also manage groups with the groupadd, groupmod, and groupdel commands. These command line operations let us quickly manage a group from a terminal window.

groupadd and groupdel

With the groupadd command, we can create new groups. When we add a group to the system, the
system places the group’s name in the /etc/group file and gives it a group ID number. If shadow security
is in place, changes are made to the /etc/gshadow file. The groupadd command only creates the group
category. We need to add users to the group individually. In the following example, the groupadd
command creates the engines group:

$ groupadd engines

We can delete a group with the groupdel command. In the next example, the engines group is deleted:

$ groupdel engines

groupmod

We can change the name of a group or its ID using the groupmod command. Enter groupmod -g with
the new ID number and the group name. To change the name of a group, we use the -n option. Enter
groupmod -n with the new name of the group, followed by the current name. In the next example, the
engines group has its name changed to trains:

$ groupmod -n trains engines
