Resource Management

The document discusses resource management in distributed systems, focusing on scheduling techniques such as task assignment, load balancing, and load sharing. It outlines characteristics of effective global scheduling algorithms and various policies for load estimation, process transfer, and state information exchange. Additionally, it differentiates between static and dynamic load balancing approaches and highlights the importance of fairness, scalability, and fault tolerance in resource allocation.


RESOURCE MANAGEMENT


 Resource can be a logical resource (e.g., a shared file) or a physical resource (e.g., a CPU)
 An important characteristic of a distributed operating system (DOS) is that resource usage, response time, network congestion, and scheduling overhead are optimized
 Processes are assigned to nodes accordingly
RESOURCE MANAGEMENT

Scheduling techniques in DS
 Task assignment approach
 Load balancing approach
 Load sharing approach
SCHEDULING TECHNIQUES IN DS

 Task assignment approach - the process is viewed as a collection of related tasks, and these tasks are scheduled to suitable nodes so as to improve performance
 Load-balancing approach - processes are distributed among the nodes of the system so as to equalize the workload among the nodes
 Load-sharing approach - works by ensuring that no node is idle while processes wait to be processed
DESIRABLE FEATURES OF A GOOD GLOBAL SCHEDULING ALGORITHM
 No A Priori Knowledge about the Processes - should operate with no a priori knowledge about the processes to be executed
 Dynamic In Nature - should be able to take care of the dynamically changing load (or status) of the
various nodes of the system
 Quick Decision-Making Capability - must make quick decisions
 Balanced System Performance and Scheduling Overhead
 Stability - should avoid processor thrashing, the fruitless back-and-forth migration of processes between nodes
 Scalability - should be capable of handling small as well as large networks. An algorithm that makes scheduling decisions by first querying the workload of all the nodes and then selecting the most lightly loaded node as the candidate for receiving the process(es) scales poorly.
 Fault Tolerance - at any instant of time, the algorithm should continue functioning with the nodes that are up at that time
 Fairness of Service
TASK ASSIGNMENT APPROACH: ASSUMPTIONS

 A process has already been split into pieces called tasks. The split occurs along natural boundaries, so that data transfers among the tasks are minimized.
 The amount of computation required by each task and the speed of each processor are
known.
 The cost of processing each task on every node of the system is known.
 The interprocess communication (IPC) costs between every pair of tasks are known.
 Other constraints, such as resource requirements of the tasks and the available
resources at each node, precedence relationships among the tasks, and so on, are also
known.
TASK ASSIGNMENT APPROACH: EXAMPLE

 Nodes n1 and n2 represent the two nodes (processors) of the distributed system, and
 Nodes t1 through t6 represent the tasks of the process.
 The weights of the edges joining pairs of task nodes represent intertask
communication costs.
 The weight on the edge joining a task node to node n represents the execution cost
of that task on node n.
TASK ASSIGNMENT APPROACH: EXAMPLE

 A cutset in this graph is defined to be a set of edges such that when these edges are removed, the nodes of the graph are partitioned into two disjoint subsets, with the nodes in one subset reachable from n1 and the nodes in the other reachable from n2.
TASK ASSIGNMENT APPROACH: EXAMPLE

 The weight of a cutset is the sum of the weights of the edges in the cutset.
 It represents the cost of the corresponding task assignment since the weight of a
cutset sums up the execution and communication costs for that assignment.
 An optimal assignment may be obtained by finding a minimum-weight cutset (for the two-processor case, this can be done with a max-flow/min-cut algorithm)
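The idea above can be illustrated on a tiny hypothetical instance. The execution and communication costs below are made up for illustration; the brute-force enumeration stands in for a real min-cut computation, which is what a production scheduler would use:

```python
from itertools import product

# Hypothetical execution costs: exec_cost[task][node] for two nodes n1, n2.
exec_cost = {
    "t1": {"n1": 5, "n2": 10},
    "t2": {"n1": 2, "n2": 4},
    "t3": {"n1": 9, "n2": 3},
}
# Hypothetical inter-task communication costs, charged only when the two
# tasks are assigned to different nodes (i.e., the edge is in the cutset).
comm_cost = {("t1", "t2"): 6, ("t1", "t3"): 1, ("t2", "t3"): 1}

def assignment_cost(assign):
    """Weight of the cutset = execution costs + communication across the cut."""
    cost = sum(exec_cost[t][n] for t, n in assign.items())
    cost += sum(w for (a, b), w in comm_cost.items() if assign[a] != assign[b])
    return cost

def optimal_assignment(tasks, nodes=("n1", "n2")):
    """Brute-force the minimum-weight assignment (fine for tiny examples;
    real two-processor schedulers use a max-flow/min-cut algorithm)."""
    best = min((dict(zip(tasks, choice))
                for choice in product(nodes, repeat=len(tasks))),
               key=assignment_cost)
    return best, assignment_cost(best)

best, cost = optimal_assignment(list(exec_cost))
# Here t3 is cheap on n2 and communicates little, so the optimum splits
# the tasks: t1 and t2 on n1, t3 on n2, at total cost 12.
```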
LOAD BALANCING: TAXONOMY
LOAD BALANCING

 Static load balancing distributes the workload equally among resources without
taking into account their current usage or capacity.
 Dynamic load balancing, on the other hand, takes into account the current usage and
capacity of the resources and redistributes the workload accordingly.
LOAD BALANCING

 Deterministic load balancing refers to a load balancing approach that uses a fixed
algorithm to determine how to distribute the workload.
 The decision on which resource to use for a particular task is based on a
predetermined set of rules, which can include factors such as resource availability,
resource utilization, and resource capacity.
 Deterministic load balancing algorithms are repeatable:
 Given the same input, they will always produce the same output.
 An example of deterministic load balancing is the round-robin algorithm
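A minimal sketch of the round-robin example (node names are hypothetical): tasks are handed to nodes in a fixed cyclic order, ignoring current load, so the same sequence of calls always yields the same sequence of nodes:

```python
class RoundRobinBalancer:
    """Deterministic load balancing: assign work to nodes in a fixed
    cyclic order, ignoring current usage or capacity."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.i = 0  # index of the next node to use

    def pick(self):
        node = self.nodes[self.i % len(self.nodes)]
        self.i += 1
        return node

rb = RoundRobinBalancer(["n1", "n2", "n3"])
picks = [rb.pick() for _ in range(5)]
# Deterministic: the first five picks are n1, n2, n3, n1, n2.
```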
LOAD BALANCING

 Probabilistic load balancing is a load balancing approach that uses probability-based algorithms to determine how to distribute the workload.
 The decision on which resource to use for a particular task is based on a probability distribution that takes into account factors such as resource utilization, resource capacity, and network latency
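One simple way to realize this, sketched here under the assumption that each node advertises a single spare-capacity score (the node names and scores are hypothetical), is to pick a node with probability proportional to its capacity:

```python
import random

def probabilistic_pick(capacities, rng=random):
    """Probabilistic load balancing sketch: choose a node with probability
    proportional to its advertised spare capacity. `capacities` maps node
    name -> non-negative weight; `rng` allows seeding for tests."""
    nodes = list(capacities)
    weights = [capacities[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

# With capacities 3:1, n1 should receive roughly three quarters of the work.
rng = random.Random(0)
caps = {"n1": 3, "n2": 1}
picks = [probabilistic_pick(caps, rng) for _ in range(4000)]
```

A real policy would fold in utilization and network latency by adjusting the weights rather than using raw capacity.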
LOAD BALANCING

 In noncooperative algorithms, individual entities act autonomously and make scheduling decisions independently of the actions of other entities.
 In cooperative algorithms, the distributed entities cooperate with each other to make scheduling decisions.
ISSUES IN DESIGNING LOAD-BALANCING ALGORITHMS
 Load estimation policy- which determines how to estimate the workload of a particular
node of the system
 Process transfer policy- which determines whether to execute a process locally or
remotely
 State information exchange policy- which determines how to exchange the system load
information among the nodes
 Location policy - which determines to which node a process selected for transfer should
be sent
 Priority assignment policy - which determines the priority of execution of local and remote
processes at a particular node
 Migration limiting policy - which determines the total number of times a process can
migrate from one node to another
LOAD ESTIMATION POLICIES
 Important question - how to measure the workload of a particular node?
Some measurable parameters
 Total number of processes on the node at the time of load estimation
 Resource demands of these processes
 Instruction mixes of these processes
 Architecture and speed of the node's processor
Methods
 Total number of processes present on the node
 Sum of the remaining service times
 Modern distributed systems - CPU utilization
PROCESS TRANSFER POLICIES
 Important question - how to decide whether a node is lightly or heavily loaded?
 Solution - Threshold policy
Methods to determine threshold:
 Static policy
 Each node has a predefined threshold value depending on its processing capability
 No exchange of state information among the nodes is required
 Dynamic policy
 The threshold value of a node ni is calculated as the product of the average workload of all the nodes and a predefined constant ci
 For each node ni, the value of ci depends on the processing capability of node ni relative to the processing capability of all other nodes.
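The dynamic policy above is a one-line formula; the sketch below makes it concrete (node names, loads, and the constants ci are hypothetical):

```python
def dynamic_threshold(loads, c, node):
    """Dynamic threshold policy: threshold of node ni = ci * average load
    of all nodes, where ci reflects ni's relative processing capability.
    `loads` maps node -> current load; `c` maps node -> its constant ci."""
    avg = sum(loads.values()) / len(loads)
    return c[node] * avg

loads = {"n1": 4, "n2": 8, "n3": 6}            # average load = 6
c = {"n1": 0.5, "n2": 1.5, "n3": 1.0}          # relative capabilities
# A slow node (ci = 0.5) gets threshold 3.0; a fast one (ci = 1.5) gets 9.0.
```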
PROCESS TRANSFER POLICIES

 Drawback - the load of a node may exceed the threshold as soon as a remote process arrives.
Solution - double-threshold policy called the high-low policy.
 When the load of the node is in the overloaded region, new local processes run
remotely and requests to accept remote processes are rejected.
 When the load of the node is in the normal region, new local processes run locally
and requests to accept remote processes are rejected.
 When the load of the node is in the under-loaded region, new local processes run
locally and requests to accept remote processes are accepted
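The three regions of the high-low policy map directly onto a small decision function (the numeric thresholds in the demo are hypothetical):

```python
def high_low_decision(load, low, high):
    """Double-threshold (high-low) policy: returns a pair
    (where new local processes run, whether remote processes are accepted)."""
    if load > high:            # overloaded region
        return ("remote", False)
    if load < low:             # under-loaded region
        return ("local", True)
    return ("local", False)    # normal region

# With low=2 and high=6: load 10 is overloaded, 1 is under-loaded, 4 is normal.
```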
LOCATION POLICIES

 The next step is to select the destination node for the process's execution.
1. Threshold
 A destination node is selected at random and a check is made
 If the check indicates that the selected node cannot accept remote processes,
another node is selected at random
 This continues until either a suitable destination node is found or the number of
probes exceeds a static probe limit Lp
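The random-probing loop described above is short enough to sketch directly; `can_accept` below is a hypothetical predicate standing in for the remote "can this node accept a process?" check:

```python
import random

def threshold_location(nodes, can_accept, probe_limit, rng=random):
    """Threshold location policy: probe randomly chosen nodes until one
    can accept the process or the probe limit Lp is exhausted.
    Returns the chosen node, or None (meaning: execute locally)."""
    for _ in range(probe_limit):
        candidate = rng.choice(nodes)
        if can_accept(candidate):
            return candidate
    return None
```

In practice each probe is a network message, so a small Lp bounds the overhead per placement decision.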
LOCATION POLICIES

2. Shortest - In this method, Lp distinct nodes are chosen at random, and each is polled
in turn to determine its load. The process is transferred to the node having the
minimum load value
 This method uses more state information than the threshold policy, and in a more complex manner
3. Bidding
 The system is turned into a distributed computational economy with buyers and
sellers of services
 Each node in the network is responsible for two roles:
 manager - represents a node whose process needs a location to execute
 contractor - represents a node that is able to accept remote processes.
LOCATION POLICIES

 The manager broadcasts a request-for-bids message to all other nodes in the system.
 Upon receiving this message, the contractor nodes return bids to the manager node.
 The bids contain the quoted prices, which vary based on the processing capability,
memory size, resource availability, and so on
 Of the bids received from the contractor nodes, the manager node chooses the best
bid.
 The best bid for a manager's request may mean the cheapest, the fastest, or the best price-performance ratio
 But it is possible that a contractor node may simultaneously win many bids from
many other manager nodes and thus become overloaded.
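The core of the bidding exchange, reduced to its selection step: the manager collects quoted prices from the contractors and awards the work to the best bid. The bid values and node names below are hypothetical, and "best" is taken to mean cheapest:

```python
def run_auction(bids):
    """Bidding location policy sketch: `bids` maps contractor node ->
    quoted price (reflecting processing capability, memory, resource
    availability, etc.). The manager picks the cheapest bid."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# n2 quotes the lowest price, so it wins this manager's request.
winner, price = run_auction({"n1": 9, "n2": 4, "n3": 7})
```

Note the caveat from the text: a cheap contractor may win several concurrent auctions and overload itself, so a real system must re-quote or bound outstanding awards.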
LOCATION POLICIES

4. Pairing - reduces the variance of loads only between pairs of nodes of the system
 Two nodes that differ greatly in load are temporarily paired with each other
 One or more processes are migrated from the more heavily loaded node to the other node
STATE INFORMATION EXCHANGE POLICIES

1. Periodic Broadcast- each node broadcasts its state information after the elapse of
every t units of time.
 Drawback:
 It generates heavy network traffic
 There is a possibility of fruitless messages
2. Broadcast When State Changes
 A node broadcasts its state information only when the state of the node changes.
 A refinement - a node broadcasts its state information only when its state switches from the normal load region to either the under-loaded region or the overloaded region.
STATE INFORMATION EXCHANGE POLICIES

3. On-Demand Exchange
 A node broadcasts a StateInformationRequest message when its state switches from the normal load region to either the under-loaded region or the overloaded region.
 On receiving this message, other nodes send their current state to the requesting
node.
4. Exchange by Polling
 No broadcasting like previous methods
 When a node needs the cooperation of some other node for load balancing, it can
search for a suitable partner by randomly polling the other nodes one by one
PRIORITY ASSIGNMENT POLICIES

1. Selfish - Local processes are given higher priority than remote processes.
2. Altruistic - Remote processes are given higher priority than local processes.
3. Intermediate - the priority of processes depends on the number of local processes and the number of remote processes at the concerned node.
 If the number of local processes is greater than or equal to the number of remote
processes, local processes are given higher priority than remote processes.
 Otherwise, remote processes are given higher priority than local processes.
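The intermediate rule above is a single comparison, shown here as a tiny helper for concreteness:

```python
def intermediate_priority(num_local, num_remote):
    """Intermediate priority assignment policy: local processes get higher
    priority when they are at least as numerous as remote processes;
    otherwise remote processes get higher priority."""
    return "local" if num_local >= num_remote else "remote"

# 3 local vs 2 remote -> local wins; a tie also favors local processes.
```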
MIGRATION-LIMITING POLICIES

 Determines the total number of times a process should be allowed to migrate
 Uncontrolled - a process may be migrated any number of times
 Controlled - a migration count parameter fixes a limit on the number of times that a process may migrate.
LOAD SHARING APPROACH

 The priority assignment policies and the migration-limiting policies for load-sharing algorithms are the same as those for load-balancing algorithms
 Load Estimation Policies
 Total number of processes on a node
 Measuring CPU utilization
PROCESS TRANSFER POLICIES

 This strategy uses the single threshold policy with the threshold value of all the
nodes fixed at 1.
 A node becomes a candidate for accepting a remote process only when it has no
process, and a node becomes a candidate for transferring a process as soon as it has
more than one process.
LOCATION POLICIES

 Sender-initiated policy - the sender node of the process decides where to send the process
 Heavily loaded nodes search for lightly loaded nodes to which work may be transferred
 The sender either broadcasts a message or randomly probes the other nodes one by one

 Receiver-initiated policy - the receiver node of the process decides from where to get the process
STATE INFORMATION EXCHANGE POLICIES

 Broadcast When State Changes
 A node broadcasts a StateInformationRequest message when it becomes either under-loaded or overloaded.
 Poll When State Changes
 The broadcast protocol is unsuitable for large networks,
 When a node's state changes, it randomly polls the other nodes one by one and
exchanges state information with the polled nodes.
 Stop when a suitable node has been found or the number of probes has reached
the probe limit.
