Operating Systems: Internals and Design Principles
Eighth Edition
By William Stallings

Chapter 16
Distributed Processing, Client/Server, and Clusters
Table 16.1
Client/Server Terminology
Applications Programming Interface (API)
A set of function and call programs that allow clients and servers to intercommunicate
Client
A networked information requester, usually a PC or workstation, that can query database
and/or other information from a server
Middleware
A set of drivers, APIs, or other software that improves connectivity between a client
application and a server
Relational Database
A database in which information access is limited to the selection of rows that satisfy all
search criteria
Server
A computer, usually a high-powered workstation, a minicomputer, or a mainframe, that
houses information for manipulation by networked clients
Structured Query Language (SQL)
A language developed by IBM and standardized by ANSI for addressing, creating,
updating, or querying relational databases
Client/Server Computing:
Client
Client machines are generally single-user workstations
providing a user-friendly interface to the end user
client-based stations generally present the type of graphical
interface that is most comfortable to users, including the use
of windows and a mouse
Microsoft Windows and Macintosh OS provide examples of
such interfaces
client-based applications are tailored for
ease of use and include such familiar
tools as the spreadsheet
Client/Server Computing:
Server
Each server provides a set of shared services to the clients
most common type of server currently is the database server,
usually controlling a relational database
enables many clients to share access to the same database
enables the use of a high-performance computer system to
manage the database
Figure 16.1 Generic Client/Server Environment (workstations acting as clients connected to a server over a LAN, WAN, or the Internet)
Client/Server Characteristics
A client/server configuration differs from other types of
distributed processing:
there is a heavy reliance on bringing user-friendly
applications to the user on his or her own system
there is an emphasis on centralizing corporate databases and
many network management and utility functions
there is a commitment, both by user organizations and
vendors, to open and modular systems
networking is fundamental to the operation
Figure 16.2 Generic Client/Server Architecture (presentation services and the client portion of the application logic on the client workstation; the server portion of the application logic on the server; requests and responses carried by protocol interaction between the communications software on each side, running over each machine's operating system and hardware platform)
Client/Server Applications
The key feature of a client/server architecture is the
allocation of application-level tasks between clients and
servers
Hardware and the operating systems of client and server
may differ
These lower-level differences are irrelevant as long as a
client and server share the same communications protocols
and support the same applications
Client/Server Applications
It is the communications software that enables client and server to
interoperate
principal example is TCP/IP
Actual functions performed by the application can be split up between
client and server in a way that optimizes the use of resources
The design of the user interface on the client machine is critical
there is heavy emphasis on providing a graphical user interface
(GUI) that is easy to use, easy to learn, yet powerful and
flexible
Figure 16.3 Client/Server Architecture for Database Applications (presentation services and all application logic reside on the client; client-side database logic exchanges requests and responses with database logic and a database management system on the server)
Figure 16.4 Client/Server Database Usage
(a) Desirable client/server use: against a 1,000,000-record database, the client's initial query narrows the result to 100,000 possible records, the next query to 1,000 possible records, and the final query returns one record.
(b) Misused client/server: a single client query causes 300,000 records to be returned across the network from the 1,000,000-record database.
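The contrast in Figure 16.4 can be illustrated with a minimal sketch in Python, using the standard sqlite3 module as a stand-in for a remote database server; the table name accounts and its columns are hypothetical. In the desirable pattern the selection criteria travel to the server and only the matching row comes back; in the misused pattern the whole table is shipped to the client, which filters it locally.

```python
import sqlite3

# Stand-in for a remote database server holding many records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(100_000)])

# Desirable: the WHERE clause is evaluated by the server; one row is returned.
row = conn.execute("SELECT id, balance FROM accounts WHERE id = ?",
                   (4242,)).fetchone()

# Misused: every record crosses the "network" and the client filters them.
rows = conn.execute("SELECT id, balance FROM accounts").fetchall()
row_again = next(r for r in rows if r[0] == 4242)

assert row == row_again
```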
Classes of
Client/Server Applications
Four general classes are:
Host-based processing
Server-based processing
Cooperative processing
Client-based processing
(a) Host-based processing
Not true client/server computing
Traditional mainframe environment in which all or virtually all of the processing is done on a central host
Often the user interface is via a dumb terminal
The user's station is generally limited to the role of a terminal emulator

(b) Server-based processing
Server does all the processing
Client provides a graphical user interface
Rationale behind configuration is that the user workstation is best suited to providing a user-friendly interface and that databases and applications can easily be maintained on central systems
User gains the advantage of a better interface

(c) Cooperative processing
Application processing is performed in an optimized fashion
Complex to set up and maintain
Offers greater productivity and efficiency

(d) Client-based processing
All application processing is done at the client
Data validation routines and other database logic functions are done at the server
Some of the more sophisticated database logic functions are housed on the client side
This architecture is perhaps the most common client/server approach in current use
It enables the user to employ applications tailored to local needs

Figure 16.5 Classes of Client/Server Applications (shows, for each class, how presentation logic, application logic, database logic, and the DBMS are divided between client and server)
Figure 16.6 Three-tier Client/Server Architecture (client; middle-tier server (application server); back-end servers (data servers))
File Cache Consistency
When a file server is used, performance of file I/O can be
noticeably degraded relative to local file access because of
the delays imposed by the network
File caches hold recently accessed file records
Because of the principle of locality, use of a local file
cache should reduce the number of remote server
accesses that must be made
The simplest approach to cache consistency is to use file
locking techniques to prevent simultaneous access to a file
by more than one client
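A minimal sketch of that simplest approach, using an advisory whole-file lock via the POSIX fcntl module (Unix-only; the file path is hypothetical): a client takes an exclusive lock before modifying the file, so no other client can access it mid-update and serve a stale cached copy.

```python
import fcntl

def update_record(path: str, data: bytes) -> None:
    """Append a record while holding an exclusive advisory lock on the file."""
    with open(path, "ab") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until no other client holds the lock
        try:
            f.write(data)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

update_record("/tmp/shared.dat", b"record-1\n")
```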
Figure 16.7 Distributed File Cacheing in Sprite (client caches and a server cache sit between client file traffic, network traffic, server file traffic, and server disk traffic)
Middleware
To achieve the true benefits of the client/server approach
developers must have a set of tools that provide a uniform
means and style of access to system resources across all
platforms
This would enable programmers to build applications that
look and feel the same
Enable programmers to use the same method to access
data regardless of the location of that data
The way to meet this requirement is by the use of
standard programming interfaces and protocols that sit
between the application (above) and communications
software and operating system (below)
Figure 16.8 The Role of Middleware in Client/Server Architecture (middleware on the client and on the server interact with each other above the communications software; the client middleware sits below presentation services and application logic, the server middleware below the server's application services)
Figure 16.10 Middleware Mechanisms: (a) Message-Oriented Middleware, in which client and server middleware with message queues exchange application-specific messages; (b) Remote Procedure Calls, in which client and server stubs exchange application-specific procedure invocations and returns; (c) Object request broker, in which object requests and responses flow from the client application through the broker to an object server
Figure 16.11 Basic Message-Passing Primitives (a sending process hands a ProcessId and Message to its message-passing module, which delivers them to the message-passing module of the receiving process)
Reliability versus
Unreliability
Reliable message-passing guarantees delivery if possible
not necessary to let the sending process know that the message was
delivered (but useful)
If delivery fails, the sending process is notified of the failure
At the other extreme, the message-passing facility may simply send the
message out into the communications network but will report neither success
nor failure
this alternative greatly reduces the complexity and processing and
communications overhead of the message-passing facility
For those applications that require confirmation that a message has been
delivered, the applications themselves may use request and reply messages to
satisfy the requirement
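A minimal sketch of the two extremes over UDP, which is itself unreliable; the server address, port, and retry counts are arbitrary assumptions. The first sender simply transmits and reports nothing; the second implements the application-level request/reply confirmation described above by waiting for a reply and retransmitting on timeout.

```python
import socket

ADDR = ("127.0.0.1", 9999)          # hypothetical server address

def send_unreliable(payload: bytes) -> None:
    """Fire-and-forget: reports neither success nor failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, ADDR)

def send_with_reply(payload: bytes, retries: int = 3, timeout: float = 1.0) -> bytes:
    """Application-level confirmation: retransmit until a reply message arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        for _ in range(retries):
            s.sendto(payload, ADDR)
            try:
                reply, _ = s.recvfrom(4096)
                return reply
            except socket.timeout:
                continue                 # no reply yet; try again
    raise TimeoutError("no reply from server")
```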
Blocking versus
Nonblocking
Nonblocking
• process is not suspended as a result of issuing a Send or Receive
• efficient, flexible use of the message-passing facility by processes
• difficult to test and debug programs that use these primitives
• irreproducible, timing-dependent sequences can create subtle and difficult problems

Blocking
• the alternative is to use blocking, or synchronous, primitives
• Send does not return control to the sending process until the message has been transmitted or until the message has been sent and an acknowledgment received
• Receive does not return control until a message has been placed in the allocated buffer
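The distinction can be sketched with a UDP socket in Python (the local port is hypothetical): a blocking Receive suspends the caller until a message is in the buffer, while a nonblocking Receive returns immediately and the process itself must cope with the case where nothing has arrived yet.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9998))       # hypothetical local endpoint

# Blocking receive: the caller is suspended until a message arrives.
# msg, sender = sock.recvfrom(4096)

# Nonblocking receive: control returns at once; the caller must poll or use
# an event mechanism, and handle BlockingIOError when no message is waiting.
sock.setblocking(False)
try:
    msg, sender = sock.recvfrom(4096)
except BlockingIOError:
    msg = None                       # nothing there yet; do other work, retry later
```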
Remote Procedure Calls
Allow programs on different machines to interact using simple
procedure call/return semantics
Used for access to remote services
Widely accepted and common method for encapsulating
communication in a distributed system
Standardized
• the communication code for an application can be generated automatically
• client and server modules can be moved among computers and operating systems with little modification and recoding
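A minimal sketch of these call/return semantics using Python's standard xmlrpc package; the procedure name add and the port are hypothetical, and server and client are run in one process here only for illustration. The client invokes the remote procedure with ordinary call syntax while the library supplies the communication code.

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def add(x, y):
    return x + y

# Server side: register the procedure and serve requests in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote procedure is invoked like a local one.
proxy = ServerProxy("http://127.0.0.1:8000")
print(proxy.add(2, 3))               # -> 5
```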
Figure 16.12 Remote Procedure Call Mechanism (the client application issues what looks like a local procedure call to a local stub; the stub and the RPC mechanism carry the call across the network to a matching stub on the remote server application, and the response returns by the same path)
Parameter Passing/
Parameter Representation
Passing a parameter by value is easy with RPC
Passing by reference is more difficult
a unique system wide pointer is necessary
the overhead for this capability may not be worth the effort
The representation/format of the parameter and message may be
difficult if the programming languages differ between client and server
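A minimal sketch of flattening by-value parameters into a machine-independent representation; JSON is used here purely for illustration (real RPC systems use formats such as XDR or similar standards), and the procedure name and arguments are hypothetical.

```python
import json

def marshal_call(proc_name, *args):
    """Flatten a call and its by-value parameters into a byte string."""
    return json.dumps({"proc": proc_name, "args": args}).encode("utf-8")

def unmarshal_call(message):
    """Recover the procedure name and parameter list on the server side."""
    request = json.loads(message.decode("utf-8"))
    return request["proc"], request["args"]

msg = marshal_call("transfer", 42, "savings", 99.50)
proc, args = unmarshal_call(msg)      # ('transfer', [42, 'savings', 99.5])
```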
Client/Server Binding
Nonpersistent Binding
A binding is formed when two applications have made a logical connection and are prepared to exchange commands and data
Nonpersistent binding means that a logical connection is established between the two processes at the time of the remote procedure call and that as soon as the values are returned, the connection is dismantled
The overhead involved in establishing connections makes nonpersistent binding inappropriate for remote procedures that are called frequently by the same caller

Persistent Binding
A connection that is set up for a remote procedure call is sustained after the procedure return
The connection can then be used for future remote procedure calls
If a specified period of time passes with no activity on the connection, then the connection is terminated
For applications that make many repeated calls to remote procedures, persistent binding maintains the logical connection and allows a sequence of calls and returns to use the same connection
Synchronous versus
Asynchronous
Synchronous RPC
• behaves much like a subroutine call
• behavior is predictable
• however, it fails to exploit fully the parallelism inherent in distributed applications
• this limits the kind of interaction the distributed application can have, resulting in lower performance

Asynchronous RPC
• does not block the caller
• replies can be received as and when they are needed
• allow client execution to proceed locally in parallel with server invocation
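A minimal sketch of the difference, building on the xmlrpc sketch above and assuming the same hypothetical server and procedure: the synchronous call blocks the caller, while issuing the call on a worker thread lets the client keep computing locally and collect the reply only when it is needed.

```python
from concurrent.futures import ThreadPoolExecutor
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://127.0.0.1:8000")   # hypothetical RPC server

# Synchronous: the caller blocks until the result comes back.
# total = proxy.add(2, 3)

# Asynchronous: the remote call runs in the background; the client proceeds
# locally in parallel and rendezvouses with the reply later.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(proxy.add, 2, 3)
    # ... client continues with local work here ...
    total = future.result()                     # collect the reply when needed
```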
Object-Oriented
Mechanisms
Clients and servers ship messages back and forth between objects
A client that needs a service sends a request to an object broker
The broker calls the appropriate object and passes along any relevant
data
The remote object services the request and replies to the broker, which
returns the response to the client
The success of the object-oriented approach depends on standardization
of the object mechanism
Examples include Microsoft’s COM and CORBA
Clusters
Alternative to symmetric multiprocessing (SMP) as an approach to
providing high performance and high availability
Group of interconnected, whole computers working together as a
unified computing resource that can create the illusion of being one
machine
Whole computer means a system that can run on
its own, apart from the cluster
Each computer in a cluster is referred to as a node
Benefits of Clusters
Absolute scalability
a cluster can have dozens or even hundreds of machines, each of which is a multiprocessor

Incremental scalability
configured in such a way that it is possible to add new systems to the cluster in small increments

High availability
the failure of one node is not critical to the system

Superior price/performance
by using commodity building blocks, it is possible to put together a cluster at a much lower cost than a single large machine
Figure 16.13 Cluster Configurations: (a) standby server with no shared disk, two systems joined only by a high-speed message link; (b) shared disk, with both systems connected by the message link and also cabled to a RAID disk subsystem
Table 16.2
Clustering Methods: Benefits and Limitations

Passive Standby
Description: A secondary server takes over in case of primary server failure.
Benefits: Easy to implement.
Limitations: High cost because the secondary server is unavailable for other processing tasks.

Active Secondary
Description: The secondary server is also used for processing tasks.
Benefits: Reduced cost because secondary servers can be used for processing.
Limitations: Increased complexity.

Separate Servers
Description: Separate servers have their own disks. Data is continuously copied from primary to secondary server.
Benefits: High availability.
Limitations: High network and server overhead due to copying operations.

Servers Connected to Disks
Description: Servers are cabled to the same disks, but each server owns its disks. If one server fails, its disks are taken over by the other server.
Benefits: Reduced network and server overhead due to elimination of copying operations.
Limitations: Usually requires disk mirroring or RAID technology to compensate for risk of disk failure.

Servers Share Disks
Description: Multiple servers simultaneously share access to disks.
Benefits: Low network and server overhead. Reduced risk of downtime caused by disk failure.
Limitations: Requires lock manager software. Usually used with disk mirroring or RAID technology.
Operating System Design Issues
Failure Management
Two approaches can be taken to deal with failures:
Highly available clusters
offers a high probability that all resources will be in service
if a failure occurs, the queries in progress are lost
any lost query, if retried, will be serviced by a different computer in the cluster
the cluster operating system makes no guarantee about the state of partially executed transactions

Fault-tolerant clusters
ensures that all resources are always available
achieved by the use of redundant shared disks and mechanisms for backing out uncommitted transactions and committing completed transactions
Operating System Design Issues
Failure Management
The function of switching an application and data resources
over from a failed system to an alternative system in the
cluster is referred to as failover
The restoration of applications and data resources to the
original system once it has been fixed is referred to as
failback
Failback can be automated but this is desirable only if the
problem is truly fixed and unlikely to recur
Automatic failback can cause subsequently failed resources
to bounce back and forth between computers, resulting in
performance and recovery problems
Load Balancing
A cluster requires an effective capability for balancing the load among
available computers
This includes the requirement that the cluster be incrementally scalable
When a new computer is added to the cluster, the load-balancing facility
should automatically include this computer in scheduling applications
Middleware must recognize that services can appear on different
members of the cluster and may migrate from one member to another
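A minimal sketch, not from the text, of a load-balancing facility that automatically includes newly added computers in scheduling; the node names are hypothetical and simple round-robin stands in for a real scheduling policy.

```python
class RoundRobinBalancer:
    """Cycle work across whatever nodes are currently members of the cluster."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = 0

    def add_node(self, node):
        # A newly added computer is immediately eligible for scheduling.
        self.nodes.append(node)

    def pick(self):
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

lb = RoundRobinBalancer(["node-a", "node-b"])
lb.add_node("node-c")
print([lb.pick() for _ in range(6)])   # node-a, node-b, node-c, node-a, ...
```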
Parallelizing Computation
Parallelizing compiler
• determines, at compile time, which parts of an application can be executed in parallel
• performance depends on the nature of the problem and how well the compiler is
designed
Parallelized application
• the programmer writes the application from the outset to run on a cluster and uses
message passing to move data, as required, between cluster nodes
• this places a high burden on the programmer but may be the best approach for exploiting
clusters for some applications
Parametric computing
• this approach can be used if the essence of the application is an algorithm or program
that must be executed a large number of times, each time with a different set of starting
conditions or parameters
• for this approach to be effective, parametric processing tools are needed to organize, run,
and manage the jobs in an orderly manner
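A minimal sketch of the parametric computing idea: one hypothetical simulation function is executed many times with different starting parameters, and a process pool spreads the runs across available processors (a parametric processing tool on a cluster would farm the same jobs out to nodes instead).

```python
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Stand-in for the application run once per set of starting conditions."""
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % (2 ** 31)
    return x / 2 ** 31

if __name__ == "__main__":
    parameters = range(16)                        # one job per starting condition
    with Pool() as pool:
        results = pool.map(simulate, parameters)  # run the jobs in parallel
    print(results[:4])
```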
Figure 16.14 Cluster Computer Architecture [BUYY99a] (parallel applications, sequential applications, and a parallel programming environment run over cluster middleware, which provides a single system image and availability infrastructure across PC/workstation nodes, each with communications software and a network interface, connected by a high-speed network/switch)
Clusters Compared to SMP
Both clusters and SMP provide a configuration with multiple processors
to support high-demand applications
Both solutions are commercially available
SMP has been around longer
SMP is easier to manage and configure
SMP takes up less physical space and draws less power than a
comparable cluster
SMP products are well established and stable
Clusters are better for incremental and absolute scalability
Clusters are superior in terms of availability
Windows Cluster Server
Windows Failover Clustering is a shared-nothing cluster in which each
disk volume and other resources are owned by a single system at a time
The Windows cluster design makes use of the
following concepts:
• cluster service – the collection of software on each
node that manages all cluster-specific activity
• resource – an item managed by the cluster service
• online – a resource is said to be online at a node
when it is providing service on that specific node
• group – a collection of resources managed as a
single unit
Group
Combines resources into larger units that are easily managed
both for failover and load balancing
Operations performed on a group automatically affect all of the
resources in that group
Resources are implemented as DLLs
managed by a resource monitor
Resource monitor interacts with the cluster service via remote procedure
calls and responds to cluster service commands to configure and move
resource groups
Figure 16.15 Windows Cluster Server Block Diagram [SHOR97] (cluster management tools reach the cluster service through the cluster API DLL and RPC; the cluster service comprises the node manager, database manager, global update manager, event processor, failover manager/resource manager, and a communication manager linking to other nodes; resource monitors manage physical, logical, and application resource DLLs through the resource management interface, supporting both cluster-aware and non-aware applications)
Windows Clustering
Components
Node Manager
responsible for maintaining this node's membership in the cluster

Configuration Database Manager
maintains the cluster configuration database

Resource Manager/Failover Manager
makes all decisions regarding resource groups and initiates appropriate actions

Event Processor
connects all of the components of the cluster service, handles common operations, and controls cluster service initialization
Beowulf and Linux Clusters
Beowulf project:
was initiated in 1994 under the sponsorship of the
NASA High Performance Computing and
Communications (HPCC) project
goal was to investigate the potential of clustered PCs
for performing important computation tasks beyond
the capabilities of contemporary workstations at
minimum cost
is widely implemented and is perhaps the most
important cluster technology available
Beowulf Features
Mass market commodity items
Dedicated processors and network
A dedicated, private network
No custom components
Easy replication from multiple vendors
Scalable I/O
A freely available software base
Use of freely available distribution computing tools with minimal changes
Return of the design and improvements to the community
Figure 16.16 Generic Beowulf Configuration (Linux workstations with distributed shared storage, connected by an Ethernet or interconnected Ethernets)
Beowulf System Software
Is implemented as an add-on to commercially available, royalty-
free base Linux distributions
Each node in the Beowulf cluster runs its own copy of the Linux
kernel and can function as an autonomous Linux system
Examples of Beowulf system software:
Beowulf distributed process space (BPROC)
Beowulf Ethernet Channel Bonding
Pvmsync
EnFuzion
Summary
Client/server computing
  What is client/server computing?
  Client/server applications
  Middleware
Distributed message passing
  Reliability versus unreliability
  Blocking versus nonblocking
Remote procedure calls
  Parameter passing
  Parameter representation
  Client/server binding
  Synchronous versus asynchronous
  Object-oriented mechanisms
Clusters
  Cluster configurations
  Operating system design issues
  Cluster computer architecture
  Clusters compared to SMP
Windows cluster server
Beowulf and Linux clusters
  Beowulf features
  Beowulf software