APEX INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Machine Learning (21CSH-286)
Faculty: Prof. (Dr.) Madan Lal Saini(E13485)
Lecture – 29
DISCOVER . LEARN . EMPOWER
Reinforcement Learning
COURSE OBJECTIVES
The course aims to:
• Understand and apply various data handling and visualization techniques.
• Understand basic learning algorithms and techniques and their applications, as well as general questions related to analysing and handling large data sets.
• Develop skills in supervised and unsupervised learning techniques and implement them to solve real-life problems.
• Develop basic knowledge of machine learning techniques to build intelligent machines that make decisions on behalf of humans.
• Develop skills for selecting suitable model parameters and applying them to design optimized machine learning applications.
COURSE OUTCOMES
On completion of this course, the students shall be able to:
CO1: Understand machine learning techniques and computing environments that are suitable for the applications under consideration.
CO2: Understand data pre-processing techniques and apply them for data cleaning.
CO3: Identify and implement simple learning strategies using data science and statistics principles.
CO4: Evaluate a machine learning model's performance and apply learning strategies to improve the performance of supervised and unsupervised learning models.
CO5: Develop a suitable model for supervised and unsupervised learning algorithms and optimize the model for the expected accuracy.
Unit-3 Syllabus
Unit-3: Unsupervised Learning (Contact Hours: 10)
Clustering: Types of clustering (centroid-based, density-based, distribution-based and hierarchical clustering); K-Means clustering, KNN (K-Nearest Neighbours), DBSCAN clustering algorithm; performance metrics for clustering: Silhouette Score.
Association Rule Learning: Apriori algorithm, FP-Growth algorithm, applications of association rule learning, Market Basket Analysis.
Reinforcement Learning: Types of reinforcement learning, key features of reinforcement learning, elements of reinforcement learning, applications of reinforcement learning.
SUGGESTIVE READINGS
TEXT BOOKS:
T1: Tom M. Mitchell, "Machine Learning", McGraw Hill International Edition.
T2: Ethem Alpaydin, "Introduction to Machine Learning", Eastern Economy Edition, Prentice Hall of India, 2005.
T3: Andreas C. Müller, Sarah Guido, "Introduction to Machine Learning with Python", O'Reilly.
REFERENCE BOOKS:
R1: Sebastian Raschka, Vahid Mirjalili, "Python Machine Learning".
R2: Richard O. Duda, Peter E. Hart, David G. Stork, "Pattern Classification", Wiley, 2nd Edition.
R3: Christopher Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
What is learning?
Learning takes place as a result of interaction between an agent and the world. The idea behind learning is that
percepts received by an agent should be used not only for acting, but also for improving the agent's ability to behave optimally in the future to achieve its goal.
Learning types
Supervised learning:
a situation in which sample (input, output) pairs of the function to be learned can be perceived or are given.
You can think of it as having a kind teacher.
Reinforcement learning:
the agent acts on its environment and receives some evaluation of its action (a reinforcement), but is not told which action is the correct one for achieving its goal.
Reinforcement learning
Task
Learn how to behave successfully to achieve a goal while interacting with an external environment.
Learn via experience!
Examples
Game playing: the player knows whether it wins or loses, but does not know how to move at each step.
Control: a traffic system can measure the delay of cars, but does not know how to decrease it.
RL is learning from interaction.
RL model
Each percept (e) is enough to determine the state (the state is accessible).
The agent can decompose the reward component from a percept.
The agent's task: to find an optimal policy, mapping states to actions, that maximizes a long-run measure of the reinforcement.
Think of reinforcement as reward.
This can be modeled as an MDP!
Review of MDP model
MDP model <S, T, A, R>
• S: set of states
• A: set of actions
• T(s,a,s') = P(s'|s,a): the probability of a transition from s to s' given action a
• R(s,a): the expected reward for taking action a in state s
$R(s,a) = \sum_{s'} P(s'|s,a)\, r(s,a,s') = \sum_{s'} T(s,a,s')\, r(s,a,s')$
[Figure: the agent-environment loop (the agent sends an action to the environment, the environment returns a state and a reward) and a sample trajectory s0, a0, r0, s1, a1, r1, s2, a2, r2, s3]
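As a small illustration (not part of the original slides), the tabular MDP above can be written down directly in Python; the two-state example, the tables T and r, and the helper expected_reward are made-up names for this sketch.

# Minimal sketch of a tabular MDP <S, A, T, R>; all numbers are invented.
S = ["s0", "s1"]                     # set of states
A = ["stay", "go"]                   # set of actions

# T[s][a][s'] = P(s' | s, a)
T = {
    "s0": {"stay": {"s0": 1.0}, "go": {"s1": 0.8, "s0": 0.2}},
    "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}},
}
# r[s][a][s'] = immediate reward for the transition s --a--> s'
r = {
    "s0": {"stay": {"s0": 0.0}, "go": {"s1": 1.0, "s0": 0.0}},
    "s1": {"stay": {"s1": 0.5}, "go": {"s0": 0.0}},
}

def expected_reward(s, a):
    # R(s,a) = sum over s' of T(s,a,s') * r(s,a,s')
    return sum(T[s][a][s2] * r[s][a][s2] for s2 in T[s][a])

print(expected_reward("s0", "go"))   # 0.8*1.0 + 0.2*0.0 = 0.8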
Model-based vs. model-free approaches
But we do not know anything about the environment model, i.e. the transition function T(s,a,s').
There are two approaches:
Model-based RL:
learn the model, and use it to derive the optimal policy.
e.g. the adaptive dynamic programming (ADP) approach
Model-free RL:
derive the optimal policy without learning the model.
e.g. the LMS and temporal difference approaches
Which one is better?
Passive learning vs. active learning
Passive learning
The agent simply watches the world going by and tries to learn the utilities of being in various states.
Active learning
The agent does not simply watch, but also acts.
Example environment
Passive learning scenario
The agent sees sequences of state transitions and the associated rewards.
The environment generates the state transitions and the agent perceives them.
e.g. (1,1) (1,2) (1,3) (2,3) (3,3) (4,3)[+1]
(1,1) (1,2) (1,3) (1,2) (1,3) (1,2) (1,1) (2,1) (3,1) (4,1) (4,2)[-1]
Key idea: update the utility value using the given training sequences.
Passive learning scenario
LMS updating
Reward-to-go of a state:
the sum of the rewards from that state until a terminal state is reached.
Key: use the observed reward-to-go of a state as direct evidence of the actual expected utility of that state.
Learn the utility function directly from the example sequences.
LMS updating
function LMS-UPDATE(U, e, percepts, M, N) returns an updated U
  if TERMINAL?[e] then
    reward-to-go ← 0
    for each e_i in percepts (starting from the end) do
      s ← STATE[e_i]
      reward-to-go ← reward-to-go + REWARD[e_i]
      U[s] ← RUNNING-AVERAGE(U[s], reward-to-go, N[s])
    end

function RUNNING-AVERAGE(U[s], reward-to-go, N[s])
  U[s] ← (U[s] * (N[s] - 1) + reward-to-go) / N[s]
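A rough Python equivalent of LMS-UPDATE, assuming each training sequence arrives as a list of (state, reward) pairs that ends in a terminal state; the function name lms_update and the -0.04 step reward in the example are illustrative, not from the slides.

def lms_update(U, N, sequence):
    # U: utility estimate per state, N: visit count per state,
    # sequence: [(state, reward), ...] for one complete run ending in a terminal state
    reward_to_go = 0.0
    for state, reward in reversed(sequence):       # walk back from the terminal state
        reward_to_go += reward
        N[state] = N.get(state, 0) + 1
        # running average: U[s] <- (U[s] * (N[s] - 1) + reward-to-go) / N[s]
        U[state] = U.get(state, 0.0) + (reward_to_go - U.get(state, 0.0)) / N[state]
    return U, N

U, N = {}, {}
seq = [((1, 1), -0.04), ((1, 2), -0.04), ((1, 3), -0.04),
       ((2, 3), -0.04), ((3, 3), -0.04), ((4, 3), 1.0)]    # assumed -0.04 step cost
lms_update(U, N, seq)
print(U[(1, 1)])                                   # observed reward-to-go of (1,1)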
LMS updating algorithm in passive learning
Drawbacks:
It ignores the fact that the actual utility of a state is constrained to be the probability-weighted average of its successors' utilities.
It converges very slowly to the correct utility values (it requires a lot of sequences); for our example, more than 1000!
Temporal difference method in passive learning
TD(0) key idea:
adjust the estimated utility value of the current state based on its immediate reward and the estimated value of the next state.
The updating rule:
$U(s) \leftarrow U(s) + \alpha\,(R(s) + U(s') - U(s))$
$\alpha$ is the learning rate parameter.
Only when $\alpha$ is a function that decreases as the number of times a state has been visited increases can U(s) converge to the correct value.
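A one-function Python sketch of this TD(0) rule (with no discount factor, to match the slide); the name td_update and the example transition are illustrative.

def td_update(U, s, r, s_next, alpha):
    # U(s) <- U(s) + alpha * ( R(s) + U(s') - U(s) )
    u_s = U.get(s, 0.0)
    U[s] = u_s + alpha * (r + U.get(s_next, 0.0) - u_s)
    return U

U = {}
td_update(U, (1, 1), -0.04, (1, 2), alpha=0.1)
print(U[(1, 1)])      # -0.004 after one observed transition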
The TD learning curve
[Figure: TD utility estimates plotted against the number of training sequences for the states (4,3), (2,3), (2,2), (1,1), (3,1), (4,1), (4,2)]
Adaptive dynamic programming (ADP) in passive learning
Unlike the LMS and TD methods (model-free approaches), ADP is a model-based approach!
The updating rule for passive learning:
$U(s) \leftarrow \sum_{s'} T(s,s')\,\big(r(s,s') + U(s')\big)$
However, in an unknown environment T is not given; the agent must learn T itself through its experiences with the environment.
How to learn T?
ADP learning curves
[Figure: ADP utility estimates plotted against the number of training sequences for the states (4,3), (3,3), (2,3), (1,1), (3,1), (4,1), (4,2)]
Active learning
An active agent must consider:
what actions to take?
what their outcomes may be (both for learning and for the rewards received in the long run)?
Update utility equation:
$U(s) \leftarrow \max_a \big( R(s,a) + \sum_{s'} T(s,a,s')\,U(s') \big)$
Rule to choose the action:
$a = \arg\max_a \big( R(s,a) + \sum_{s'} T(s,a,s')\,U(s') \big)$
Active ADP algorithm
For each s, initialize U(s), T(s,a,s') and R(s,a)
Initialize s to the current perceived state
Loop forever
{
  Select an action a and execute it (using the current model R and T), with
    $a = \arg\max_a \big( R(s,a) + \sum_{s'} T(s,a,s')\,U(s') \big)$
  Receive the immediate reward r and observe the new state s'
  Use the transition tuple <s,a,s',r> to update the model R and T (see further)
  For all states s, update U(s) using the updating rule
    $U(s) \leftarrow \max_a \big( R(s,a) + \sum_{s'} T(s,a,s')\,U(s') \big)$
  s ← s'
}
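A compact Python sketch of the two core steps of this loop (greedy action choice and the full utility-update sweep), assuming the model tables R[s][a] and T[s][a][s'] and the utility table U are maintained elsewhere, e.g. by the model-learning step on the next slide; all names are illustrative.

def q_value(s, a, U, R, T):
    # R(s,a) + sum over s' of T(s,a,s') * U(s')
    return R[s][a] + sum(p * U.get(s2, 0.0) for s2, p in T[s][a].items())

def greedy_action(s, U, R, T):
    # a = argmax_a ( R(s,a) + sum_s' T(s,a,s') U(s') )
    return max(T[s], key=lambda a: q_value(s, a, U, R, T))

def update_all_utilities(U, R, T):
    # one sweep of U(s) <- max_a ( R(s,a) + sum_s' T(s,a,s') U(s') ) over all states
    for s in T:
        U[s] = max(q_value(s, a, U, R, T) for a in T[s])
    return U

In a full agent, update_all_utilities would be called after every model update, and greedy_action would pick the next action to execute.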
How to learn the model?
Use the transition tuple <s, a, s', r> to learn T(s,a,s') and R(s,a). That's supervised learning!
Since the agent observes every transition (s, a, s', r) directly, it can take (s,a) / s' as an input/output example of the transition probability function T.
Different supervised learning techniques can be used (see further reading for details).
Use r and T(s,a,s') to learn R(s,a):
$R(s,a) \leftarrow \sum_{s'} T(s,a,s')\, r$
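A possible Python sketch of this model-learning step, keeping simple counts; the names N_sa, N_sas, R_sum, T_hat and R_hat are invented for the illustration.

from collections import defaultdict

N_sa = defaultdict(int)        # how many times (s, a) has been tried
N_sas = defaultdict(int)       # how many times (s, a) led to s'
R_sum = defaultdict(float)     # accumulated immediate reward for (s, a)

def record_transition(s, a, s_next, r):
    # update the counts from one experience tuple <s, a, s', r>
    N_sa[(s, a)] += 1
    N_sas[(s, a, s_next)] += 1
    R_sum[(s, a)] += r

def T_hat(s, a, s_next):
    # estimated T(s,a,s') = N[s,a,s'] / N[s,a]
    return N_sas[(s, a, s_next)] / N_sa[(s, a)] if N_sa[(s, a)] else 0.0

def R_hat(s, a):
    # estimated R(s,a) as the average immediate reward observed for (s, a)
    return R_sum[(s, a)] / N_sa[(s, a)] if N_sa[(s, a)] else 0.0

record_transition("s0", "go", "s1", 1.0)
record_transition("s0", "go", "s0", 0.0)
print(T_hat("s0", "go", "s1"), R_hat("s0", "go"))   # 0.5 0.5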
ADP approach pros and cons
Pros:
The ADP algorithm converges far faster than LMS and temporal difference learning, because it uses the information from the model of the environment.
Cons:
Intractable for large state spaces
In each step, U is updated for all states
This can be improved by prioritized sweeping (see further reading for details)
Another model-free method: TD-Q learning
Define the Q-value function:
$U(s) = \max_a Q(s,a)$
Q-value function updating rule:
$U(s) \leftarrow \max_a \big( R(s,a) + \sum_{s'} T(s,a,s')\,U(s') \big)$
$Q(s,a) \leftarrow R(s,a) + \sum_{s'} T(s,a,s')\,U(s')$
$Q(s,a) \leftarrow R(s,a) + \sum_{s'} T(s,a,s')\,\max_{a'} Q(s',a')$    (*)
Key idea of TD-Q learning
Combine the Q-value function (*) with the temporal difference approach.
The updating rule:
$Q(s,a) \leftarrow Q(s,a) + \alpha\,\big(r + \max_{a'} Q(s',a') - Q(s,a)\big)$
Rule to choose the action to take:
$a = \arg\max_a Q(s,a)$
TD-Q learning agent algorithm
For each pair (s, a), initialize Q(s,a)
Observe the current state s
Loop forever
{
  Select an action a and execute it:
    $a = \arg\max_a Q(s,a)$
  Receive the immediate reward r and observe the new state s'
  Update Q(s,a):
    $Q(s,a) \leftarrow Q(s,a) + \alpha\,\big(r + \max_{a'} Q(s',a') - Q(s,a)\big)$
  s ← s'
}
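A hedged Python sketch of this agent loop. The environment interface (env.reset() and env.step(a) returning (s', r, done)) is an assumption for the sketch, and an epsilon-greedy action choice is used instead of the purely greedy rule on the slide, anticipating the exploration problem discussed next.

import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, epsilon=0.1):
    Q = defaultdict(float)                        # Q[(s, a)], default 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:         # occasional random exploration
                a = random.choice(actions)
            else:                                 # otherwise the greedy rule from the slide
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s_next, r, done = env.step(a)
            # Q(s,a) <- Q(s,a) + alpha * ( r + max_a' Q(s',a') - Q(s,a) )
            best_next = max(Q[(s_next, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + best_next - Q[(s, a)])
            s = s_next
    return Q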
Exploration problem in active learning
An action has two kinds of outcome:
it gains rewards on the current experience tuple (s,a,s')
it affects the percepts received, and hence the ability of the agent to learn
Exploration problem in active learning
There is a trade-off when choosing an action between:
its immediate good (reflected in its current utility estimates, using what has been learned so far)
its long-term good (exploring more about the environment helps the agent behave optimally in the long run)
Two extreme approaches:
"wacky" approach: act randomly, in the hope that the agent will eventually explore the entire environment.
"greedy" approach: act to maximize utility using the current model estimate.
See Figure 20.10
It is just like humans in the real world! People need to decide between
continuing in a comfortable existence, or
striking out into the unknown in the hope of discovering a new and better life.
Exploration problem in active learning
One kind of solution: the agent should be more wacky when it has little idea of the environment, and more greedy when it has a model that is close to being correct.
In a given state, the agent should give some weight to actions that it has not tried very often,
while tending to avoid actions that are believed to be of low utility.
This is implemented by an exploration function f(u,n):
it assigns a higher utility estimate to relatively unexplored action-state pairs.
Change the updating rule of the value function to
$U^+(s) \leftarrow \max_a \Big( r(s,a) + f\big( \sum_{s'} T(s,a,s')\,U^+(s'),\ N(a,s) \big) \Big)$
where U+ denotes the optimistic estimate of the utility.
Exploration problem in active learning
One kind of definition of f(u,n):
$f(u,n) = R^+$ if $n < N_e$, and $f(u,n) = u$ otherwise
R+ is an optimistic estimate of the best possible reward obtainable in any state.
The agent will try each action-state pair (s,a) at least N_e times.
The agent will initially behave as if there were wonderful rewards scattered all over the place: it is optimistic.
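As a tiny illustration, this definition of f could be written in Python as below; the concrete values of R_plus and Ne are arbitrary placeholders.

def f(u, n, R_plus=2.0, Ne=5):
    # optimistic while (s,a) has been tried fewer than Ne times, otherwise trust u
    return R_plus if n < Ne else u

print(f(0.3, 2))    # 2.0  (under-explored: be optimistic)
print(f(0.3, 10))   # 0.3  (well explored: use the learned estimate)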
Generalization in Reinforcement Learning
So far we have assumed that all the functions learned by the agent (U, T, R, Q) are in tabular form,
i.e. it is possible to enumerate the state and action spaces.
Use generalization techniques to deal with large state or action spaces:
Function approximation techniques
Genetic algorithms and evolutionary programming
Start with a set of individuals.
Apply selection and reproduction operators to "evolve" an individual that is successful (as measured by a fitness function).
Genetic algorithms and evolutionary programming
Imagine the individuals as agent functions.
The fitness function acts as the performance measure or reward function.
No attempt is made to learn the relationship between the rewards and the actions taken by an agent.
It simply searches directly in the space of individuals to find one that maximizes the fitness function.
Genetic algorithms and evolutionary programming
Represent an individual as a binary string (each bit of the string is called a gene).
Selection works like this: if individual X scores twice as high as Y on the fitness function, then X is twice as likely to be selected for reproduction as Y.
Reproduction is accomplished by crossover and mutation, as sketched below.
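A toy Python sketch of this loop (bit-string individuals, fitness-proportional selection, one-point crossover, bit-flip mutation); the counting-ones fitness function and all parameter values are placeholders, not from the slides.

import random

def fitness(ind):
    return sum(ind)                               # placeholder fitness: count the 1-bits

def evolve(pop_size=20, length=16, generations=50, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # fitness-proportional selection of two parents (+1 avoids all-zero weights)
            x, y = random.choices(pop, weights=[fitness(i) + 1 for i in pop], k=2)
            cut = random.randrange(1, length)     # one-point crossover
            child = x[:cut] + y[cut:]
            child = [1 - b if random.random() < p_mut else b for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(fitness(evolve()))                          # fitness of the best evolved individual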
Thank you!