
19MAT301 – Mathematics for Intelligent Systems – 5

Practice Sheet – 2 & 3

Markov Models
1. Steady-state distribution of a Markov chain (with Markov chain matrix properties)
% Markov matrix: a square, non-negative matrix whose rows sum to 1
% Markov chain matrix property: the steady-state distribution is the
% eigenvector for lambda = 1
clear; clc
A=[0.8 0.1 0.1; 0.2 0.6 0.2; 0.1 0.2 0.7];
[m,n]=size(A);
A = A'; % work with the column-stochastic form
Ainf=A^50; % a high power: every column converges to the steady state
disp(Ainf)
e1=Ainf(:,1);
% show that a column of Ainf is an eigenvector for lambda = 1
disp('A*first column of Ainf')
disp(A*e1)
[eigvec,eigval]=eig(A);
column_sum=sum(abs(eigvec));
divisor=repmat(column_sum,m,1);
sumnorm_eigvec=eigvec./divisor; % scale each eigenvector so |entries| sum to 1
disp('All sum-normalized eigenvectors of A')
disp(sumnorm_eigvec)
disp('All eigenvalues of A')
disp(diag(eigval))

Practice the questions given in the class notes:


1) A salesman’s territory consists of three cities A, B and C. He never sells in the same city
on successive days. If he sells in A, then the next day he sells in city B. However, if he
sells in either B or C, then the next day he is twice as likely to sell in city A as in the other
city. In the long run, how often does he sell in each of the cities?

2)
3) A three-state Markov chain is given by the transition probability matrix
   P = [ 0    2/3  1/3
         1/2  0    1/2
         1/2  1/2  0  ]
   Find the steady-state distribution of the chain.

4) Consider the following weather sequence over three consecutive days


(i) Determine the steady-state distribution of the weather sequence. Also draw the
transition diagram.
(ii)

(iii)

(iv)

5) State transition matrix: the probability of the weather today given yesterday's weather.

                          Weather today
                     Sunny   Cloudy   Rainy
   Weather   Sunny    0.5     0.2     0.3
   yesterday Cloudy   0.1     0.6     0.3
             Rainy    0.2     0.4     0.4

   Determine the steady-state distribution and also draw the transition diagram.
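As a cross-check for the problems above, here is a minimal plain-Python sketch (not part of the sheet) that finds the steady-state distribution of the matrix in problem 5 by power iteration, i.e., by repeatedly applying pi <- pi*P:

```python
# Power iteration: for a regular Markov chain, pi_{k+1} = pi_k * P converges
# to the steady-state distribution regardless of the starting distribution.
P = [[0.5, 0.2, 0.3],
     [0.1, 0.6, 0.3],
     [0.2, 0.4, 0.4]]

pi = [1/3, 1/3, 1/3]          # any valid starting distribution works
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print(pi)  # approaches [2/9, 4/9, 1/3]
```

The same answer can be confirmed in MATLAB with the eigenvector-for-lambda-1 script in section 1 applied to this P.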

2. Generation of random state sequences


%Markov model
%Generate a state sequence, then re-estimate the transition matrix
clc; clear
T=[0.8 0.1 0.1;
0.2 0.6 0.2;
0.15 0.15 0.7];
n=1000;
s=zeros(1,n); % preallocate the sequence
s(1)=randi(3); % randomly choose the initial state
for i=1:n-1
r=rand();
% pick the next state from the current state's row of T
s(i+1)=find(r<cumsum(T(s(i),:)),1);
end
disp('Part of the state sequence')
disp(s(1:12))
% Counting transitions
A=zeros(3,3);
for i=1:n-1
A(s(i),s(i+1))=A(s(i),s(i+1))+1;
end
disp('Estimated transition probabilities')
A=A./repmat(sum(A,2),[1,3]); % normalize each row
disp(A)
disp('Original transition probabilities')
disp(T)
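For comparison, the same generate-and-re-estimate loop can be sketched in plain Python (an illustration, not part of the sheet); sampling uses the inverse-CDF comparison that the cumulative sum in the MATLAB code implements:

```python
import random

T = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.15, 0.15, 0.7]]

def sample_chain(T, n, seed=None):
    """Sample an n-step state sequence (states 0..k-1) from transition matrix T."""
    rng = random.Random(seed)
    s = [rng.randrange(len(T))]            # random initial state
    for _ in range(n - 1):
        r, cum = rng.random(), 0.0
        for j, p in enumerate(T[s[-1]]):   # inverse-CDF sampling over the row
            cum += p
            if r < cum:
                s.append(j)
                break
        else:
            s.append(len(T) - 1)           # guard against float rounding
    return s

def estimate_T(s, k):
    """Re-estimate the transition matrix by counting consecutive state pairs."""
    counts = [[0] * k for _ in range(k)]
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1
    return [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]

s = sample_chain(T, 20000, seed=1)
print(estimate_T(s, 3))   # each row should be close to the matching row of T
```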

Problems for practice:


1) Three boys A, B, C are throwing a ball to one another. A always throws the ball to B and B always
throws the ball to C, but C is just as likely to throw the ball to B as to A. Show that the process is
Markovian. Find the transition probability matrix and generate a state sequence of length 700. Also
re-estimate the parameters.
2) Consider the random experiment of rolling a die once. Generate a random state sequence of length
800 for the outcomes of the die and re-estimate the parameters. (You can define a Markov chain of
order 2 on your own.)
3) For tossing a coin once, generate a random state sequence of length 2000 for the outcomes of the
coin. Display any subsequence of length 30 and re-estimate the parameters. (You can define the
Markov chain on your own.)

3. Generation of symbol and state sequences


%HMM data generation: two hidden states, coin emissions (H/T)
clear
T=[0.5 0.5; 0.4 0.6]   % transition matrix (rows sum to 1)
state1.h=0.7;
state1.t=0.3;
state2.h=0.6;
state2.t=0.4;

% choose the initial state with probability 1/2 each
if rand()<0.5
start_state=1;
else
start_state=2;
end
current_state=start_state
s(1)=current_state;
Obs='';
for i=1:20
if current_state==1
% emit a symbol from state 1
if rand()<state1.h
Obs(i)='H';
else
Obs(i)='T';
end
% transition out of state 1
if rand()<T(1,1)
current_state=1;
else
current_state=2;
end
else
% emit a symbol from state 2
if rand()<state2.h
Obs(i)='H';
else
Obs(i)='T';
end
% transition out of state 2
if rand()<T(2,2)
current_state=2;
else
current_state=1;
end
end
s(end+1)=current_state;
end
Obs
s=s(1:end-1); % drop the unused final state so length(s)==length(Obs)
% re-estimate the emission probabilities by counting
stat1.CH=0; stat1.CT=0;
stat2.CH=0; stat2.CT=0;
for i=1:length(s)
e=Obs(i);
if s(i)==1 && e=='H'
stat1.CH=stat1.CH+1;
elseif s(i)==1 && e=='T'
stat1.CT=stat1.CT+1;
elseif s(i)==2 && e=='H'
stat2.CH=stat2.CH+1;
else
stat2.CT=stat2.CT+1;
end
end
total=stat1.CH+stat1.CT+stat2.CH+stat2.CT
e1H=stat1.CH/(stat1.CH+stat1.CT)
e1T=stat1.CT/(stat1.CH+stat1.CT)
e2H=stat2.CH/(stat2.CH+stat2.CT)
e2T=stat2.CT/(stat2.CH+stat2.CT)
%% Re-estimation of the transition parameters
% counting transitions
A=zeros(2,2);
n=length(s);
for i=1:n-1
A(s(i),s(i+1))=A(s(i),s(i+1))+1;
end
% estimated transition probabilities
A=A./repmat(sum(A,2),[1,2])
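An illustrative Python rendering of the same two-coin HMM (a sketch, not part of the sheet): generate a state/observation pair, then re-estimate each state's heads probability by counting:

```python
import random

T = [[0.5, 0.5], [0.4, 0.6]]   # transition matrix from the sheet
p_heads = [0.7, 0.6]           # P('H' | state), matching state1.h and state2.h

def generate(n, seed=None):
    """Generate n observations and the hidden states that emitted them."""
    rng = random.Random(seed)
    state = rng.randrange(2)                     # uniform initial state
    states, obs = [], []
    for _ in range(n):
        states.append(state)
        obs.append('H' if rng.random() < p_heads[state] else 'T')
        state = 0 if rng.random() < T[state][0] else 1
    return obs, states

def reestimate_emissions(obs, states):
    """Fraction of heads emitted while in each state."""
    heads, total = [0, 0], [0, 0]
    for o, s in zip(obs, states):
        total[s] += 1
        heads[s] += (o == 'H')
    return [heads[s] / total[s] if total[s] else 0.0 for s in range(2)]

obs, states = generate(50000, seed=7)
print(reestimate_emissions(obs, states))   # close to [0.7, 0.6]
```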

Problems for practice:


1) Generate the observation sequence and state sequence of the climate problem which we
discussed in class. Also re-estimate the parameters.
2) Generate the observation sequence and state sequence of the climate problem (which I
already sent you in Teams). Also re-estimate the parameters.
3) Generate 5 observations of the model which we discussed in today's class.
4) Generate 6 observations of the model which we discussed in today's class. Also
re-estimate the parameters.
5) Generate all possible state sequences and display the nth row (n is the last two digits of
your registration number).
4. Finding the probability of a symbol sequence and a state sequence,
i.e., P(Sym_Seq, State_Seq)
% Find P(sym_sequence, state_sequence): a joint probability
% T(i,j) = P(next state = j | current state = i)
T=[0.5 0.5; 0.4 0.6];
state1.h=0.7; state1.t=0.3;
state2.h=0.6; state2.t=0.4;
x='HTTTH'; % symbol (observation) sequence
n=length(x); % length of the symbol sequence
N=2^n; % number of possible state sequences
Seq=[];
for i=0:N-1
q=dec2base(i,2,n); % binary string of length n
q=strrep(q,'0','2'); % relabel states {0,1} as {2,1}
Seq=[Seq;q];
end
Seq
% emission part: P(x | state sequence)
prodx=1;
stat_seq=Seq(10,:); % take the 10th candidate state sequence
for i=1:n
if stat_seq(i)=='1' % compare characters, not numbers
if x(i)=='H'
prodx=prodx*state1.h;
else
prodx=prodx*state1.t;
end
else
if x(i)=='H'
prodx=prodx*state2.h;
else
prodx=prodx*state2.t;
end
end
end
% transition part: P(state sequence)
prods=1;
for i=1:n-1
j=str2double(stat_seq(i));
k=str2double(stat_seq(i+1));
prods=prods*T(j,k);
end
reqd_prob=vpa(prodx*prods) % vpa requires the Symbolic Math Toolbox
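As a sanity check, the same joint probability can be computed in a few lines of Python (an illustrative sketch using the sheet's parameters; like the MATLAB script, it omits the initial-state probability and multiplies emission and transition terms only):

```python
# Joint probability P(symbol sequence, state sequence) = product of the
# emission terms times the product of the transition terms.
T = {(1, 1): 0.5, (1, 2): 0.5, (2, 1): 0.4, (2, 2): 0.6}
emit = {(1, 'H'): 0.7, (1, 'T'): 0.3, (2, 'H'): 0.6, (2, 'T'): 0.4}

def joint_prob(x, states):
    p = 1.0
    for sym, s in zip(x, states):
        p *= emit[(s, sym)]                  # emission term for each step
    for a, b in zip(states, states[1:]):
        p *= T[(a, b)]                       # transition term between steps
    return p

print(joint_prob('HTTTH', [1, 2, 1, 2, 1]))  # ≈ 0.0009408
```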

Problems for practice:


1) Generate the transition probability matrix of the brand (soap) switching problem and find its
joint probability. Also determine P(sym_sequence, state_sequence).
2) Find P(x = HTTTH, state sequence = 12121).

3) Generate x with length 128 and find P(X = the row corresponding to the last two digits of
your reg. number | Pi = any convenient choice).
4) Consider a Markov model with two states and six possible emissions. The model uses: a red
die, having six sides, labeled 1 through 6; a green die, having twelve sides, five of which
are labeled 2 through 6, while the remaining seven sides are labeled 1; a weighted red coin,
for which the probability of heads is 0.9 and the probability of tails is 0.1; and a weighted
green coin, for which the probability of heads is 0.95 and the probability of tails is 0.05.
The model creates a sequence of numbers from the set {1, 2, 3, 4, 5, 6} with the following
rules: Begin by rolling the red die and writing down the number that comes up, which is the
emission. Toss the red coin and do one of the following: if the result is heads, roll the red
die and write down the result; if the result is tails, roll the green die and write down the
result. At each subsequent step, you flip the coin that has the same color as the die you
rolled in the previous step. If the coin comes up heads, roll the same die as in the previous
step. If the coin comes up tails, switch to the other die. Is this model an HMM? Write the
transition probability matrix and the emission matrix.

5) Consider the following weather sequence over three consecutive days.

Suppose you were locked in a room for several days, and you were asked about the weather outside.

The probability that your caretaker carries an umbrella is 0.1 if the weather is sunny, 0.8 if it is
actually raining, and 0.3 if it is foggy.

(i)

(ii)

6) State transition matrix: the probability of the weather today given yesterday's weather.

                          Weather today
                     Sunny   Cloudy   Rainy
   Weather   Sunny    0.5     0.2     0.3
   yesterday Cloudy   0.1     0.6     0.3
             Rainy    0.2     0.4     0.4

All we can observe now is the behavior of a dog: only he can see the weather, we cannot! The dog
can be in, out, or standing pathetically on the porch. This depends on the weather in a quantifiable
way. How do we figure out what the weather is if we can only observe the dog?

                      Dog
               In    Out   Porch
   Sunny       0.2   0.7   0.1
   Cloudy      0.4   0.4   0.2
   Rainy       0.7   0.1   0.2

All we observe is the dog: IOOOIPIIIOOOOOPPIIIIIPI
What is the underlying weather (the hidden states)?
How likely is this sequence, given our model of how the dog works?
What portion of the sequence was generated by each state?
** Explore with the Statistics and Machine Learning Toolbox.

Pseudocode for the Forward Algorithm in HMM

function p=pr_hmm(o,a,b,pi)
%INPUTS:
% o = observation sequence, labelled numerically
% a(N,N) = transition probability matrix
% b(N,M) = emission matrix
% pi = initial probability vector
%OUTPUT:
% p = probability of the given sequence under the model
% Uses the forward algorithm to compute the probability.
n=length(a(1,:));
T=length(o);
for i=1:n % initialization
m(1,i)=b(i,o(1))*pi(i);
end
for t=1:(T-1) % recursion
for j=1:n
z=0;
for i=1:n
z=z+a(i,j)*m(t,i);
end
m(t+1,j)=z*b(j,o(t+1));
end
end
p=0;
for i=1:n % termination
p=p+m(T,i);
end
end
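The same recursion in a short Python sketch (illustrative, not part of the sheet; it assumes 0-based symbol indices, with A, B, pi playing the roles of a, b, pi above):

```python
def forward(obs, A, B, pi):
    """Forward algorithm: total probability of obs under the HMM.
    obs holds 0-based symbol indices; A[i][j] is the i->j transition
    probability, B[i][k] the emission probability, pi the initial vector."""
    n = len(A)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]        # initialization
    for t in range(1, len(obs)):                            # recursion
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)                                       # termination

# Two-state coin model from section 3, with an assumed uniform initial vector:
A = [[0.5, 0.5], [0.4, 0.6]]
B = [[0.7, 0.3], [0.6, 0.4]]        # columns: H, T
print(forward([0, 1, 1, 1, 0], A, B, [0.5, 0.5]))   # P(HTTTH)
```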

Pseudocode for the Backward Algorithm in HMM


Calculate the probability of the observations given the HMM's parameters using the
backward algorithm. Make sure the class numbers run from 1 to k.
function [prob] = backward_Markov(A,B,pi,obs)
beita = zeros(size(A,1),size(obs,2));
beita(:,end) = 1; % initialization at the final time step
i = size(obs,2) - 1;
while (i > 0) % recursion, moving backwards in time
beita(:, i) = A * (B(:,obs(i + 1)) .* beita(:, i+1));
i = i - 1;
end
prob = sum( pi * (B(:,obs(1)) .* beita(:, 1)) ); % termination
end
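An equivalent Python sketch of the backward recursion (illustrative; 0-based symbol indices, beta initialized to ones at the final step):

```python
def backward(obs, A, B, pi):
    """Backward algorithm: total probability of obs under the HMM.
    Same conventions as a forward implementation: 0-based indices, rows of
    A and B indexed by state."""
    n = len(A)
    beta = [1.0] * n                                  # initialization at t = T
    for t in range(len(obs) - 2, -1, -1):             # recursion, backwards
        beta = [sum(A[i][j] * B[j][obs[t + 1]] * beta[j] for j in range(n))
                for i in range(n)]
    return sum(pi[i] * B[i][obs[0]] * beta[i] for i in range(n))  # termination

A = [[0.5, 0.5], [0.4, 0.6]]
B = [[0.7, 0.3], [0.6, 0.4]]
print(backward([0, 1, 1, 1, 0], A, B, [0.5, 0.5]))
```

Mathematically this returns the same value as the forward algorithm, which makes it a convenient cross-check.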

Viterbi Algorithm in HMM


% Transition probability matrix
trans = [0.95,0.05;
0.10,0.90];
% Emission probability matrix
emis = [1/6 1/6 1/6 1/6 1/6 1/6;
1/10 1/10 1/10 1/10 1/10 1/2];
[seq,states] = hmmgenerate(100,trans,emis); % generate observation and state sequences
% hmmviterbi calculates the most likely path through the hidden Markov model
% specified by transition probability matrix TRANS and emission probability
% matrix EMIS. TRANS(i,j) is the probability of transition from state i to
% state j; EMIS(i,k) is the probability that symbol k is emitted from state i.
estimatedStates = hmmviterbi(seq,trans,emis);
[seq,states] = ...
hmmgenerate(100,trans,emis,...
'Statenames',{'fair';'loaded'});
estimatedStates = ...
hmmviterbi(seq,trans,emis,...
'Statenames',{'fair';'loaded'});
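For readers without the toolbox, the core dynamic program behind a Viterbi decoder can be sketched in Python (an illustration, not MathWorks code), using the fair/loaded dice model above with an assumed uniform initial distribution:

```python
def viterbi(obs, A, B, pi):
    """Most likely hidden-state path for obs (0-based symbol indices)."""
    n = len(A)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]   # best prob ending in i
    psi = []                                           # backpointers per step
    for t in range(1, len(obs)):
        back, new = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: delta[i] * A[i][j])
            back.append(best)
            new.append(delta[best] * A[best][j] * B[j][obs[t]])
        psi.append(back)
        delta = new
    path = [max(range(n), key=lambda i: delta[i])]     # best final state
    for back in reversed(psi):                         # backtrack
        path.append(back[path[-1]])
    return path[::-1]

trans = [[0.95, 0.05], [0.10, 0.90]]
emis = [[1/6] * 6, [1/10] * 5 + [1/2]]
print(viterbi([5, 5, 5, 5, 0, 0, 0], trans, emis, [0.5, 0.5]))
```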
