Soft Computing
300 (3)
Dr. Lekshmi R. R.
Asst. Prof.
Department of Electrical & Electronics Engineering
Amrita School of Engineering
Adaptive Linear Neuron
(ADALINE)
Architecture
[Figure: ADALINE architecture, a single-layer neural network. Input signals x1 and x2 feed input units X1 and X2, which connect to the output unit Y through weights w1 and w2; a bias unit with constant input 1 contributes the bias b. The output of Y is y.]
Activation functions:
Training (identity): $y = y_{in}$
Testing (bipolar): $y = \begin{cases} 1 & \text{if } y_{in} \ge 0 \\ -1 & \text{if } y_{in} < 0 \end{cases}$
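As a quick illustration, here is a minimal Python sketch of these two activations; the function names `identity` and `bipolar_step` are my own, not from the slides.

```python
def identity(y_in):
    # Training activation: the output equals the net input.
    return y_in

def bipolar_step(y_in):
    # Testing activation: bipolar hard limiter.
    return 1 if y_in >= 0 else -1
```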
Training Algorithm
Delta rule (Widrow-Hoff rule)
Step 1: Initialize weights and bias. Set learning rate α (0 < α < 1).
Step 2: While the termination condition is false, do steps 3-8.
Step 3: For each training pair, do steps 4-7.
Step 4: Set input activations: $x_i = s_i$.
Step 5: Compute the response of the network:
$y_{in} = b + \sum_{i=1}^{n} x_i w_i$
Step 6: Apply the output activation (identity): $y = y_{in}$.
Step 7: Adjust the weights and bias if an error exists (y ≠ t):
$w_i^{new} = w_i^{old} + \alpha(t - y)x_i$
$b^{new} = b^{old} + \alpha(t - y)$
Else:
$w_i^{new} = w_i^{old}$
$b^{new} = b^{old}$
Step 8: Test the stopping condition: if no weights are changed, stop; else continue.
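A minimal Python sketch of steps 1-8 above, assuming list-based inputs. The name `train_adaline` and the parameters `max_epochs` and `tol` are illustrative, not from the slides; because the identity output rarely equals the target exactly, the sketch stops when the largest update in an epoch falls below a small tolerance rather than on a literal "no weight changed" test.

```python
def train_adaline(samples, targets, w, b, alpha=0.1, max_epochs=100, tol=1e-6):
    """Delta / Widrow-Hoff training of a single ADALINE unit (steps 1-8)."""
    for _ in range(max_epochs):                              # Step 2: repeat epochs
        max_change = 0.0
        for x, t in zip(samples, targets):                   # Steps 3-4: each training pair
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # Step 5: net input
            y = y_in                                         # Step 6: identity activation
            if y != t:                                       # Step 7: adjust on error
                delta = alpha * (t - y)
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b = b + delta
                max_change = max(max_change, abs(delta))     # track size of the update
        if max_change < tol:                                 # Step 8: stop when updates vanish
            break
    return w, b
```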
Testing Algorithm
Step 1: Initialize the weights and bias with the values obtained from training.
Step 2: For each input vector, do steps 3-5.
Step 3: Set input activations: $x_i = s_i$.
Step 4: Compute the response of the network:
$y_{in} = b + \sum_{i=1}^{n} x_i w_i$
Step 5: Apply the output activation (bipolar):
$y = \begin{cases} 1 & \text{if } y_{in} \ge 0 \\ -1 & \text{if } y_{in} < 0 \end{cases}$
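A matching sketch of the testing procedure, reusing the weights and bias returned by the training sketch above; `test_adaline` is an assumed name.

```python
def test_adaline(samples, w, b):
    """Apply a trained ADALINE to a list of input vectors (steps 1-5)."""
    outputs = []
    for x in samples:                                        # Step 2: each input vector
        y_in = b + sum(xi * wi for xi, wi in zip(x, w))      # Step 4: net input
        outputs.append(1 if y_in >= 0 else -1)               # Step 5: bipolar activation
    return outputs
```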
Design an ADALINE to implement the OR gate (bipolar inputs and targets)

No. of inputs: 3    No. of outputs: 1

Training data:
x1   x2   1    t
 1    1   1    1
 1   -1   1    1
-1    1   1    1
-1   -1   1   -1

Initial weights and bias: w1 = 0.1, w2 = 0.2, b = 0.3

Epoch 1 (α = 0.1), using $y = y_{in}$ and the update rules
$w_i^{new} = w_i^{old} + \alpha(t - y)x_i$, $b^{new} = b^{old} + \alpha(t - y)$:
x1   x2   1    t    y_in     y        y=t?   Δw1       Δw2       Δb         w1        w2        b
                                                                            0.1       0.2       0.3
 1    1   1    1    0.6      0.6      N      0.04      0.04      0.04       0.14      0.24      0.34
 1   -1   1    1    0.24     0.24     N      0.076    -0.076     0.076      0.216     0.164     0.416
-1    1   1    1    0.364    0.364    N     -0.0636    0.0636    0.0636     0.1524    0.2276    0.4796
-1   -1   1   -1    0.0996   0.0996   N      0.10996   0.10996  -0.10996    0.26236   0.33756   0.36964
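For completeness, here is an illustrative script that reproduces Epoch 1 above, starting from w1 = 0.1, w2 = 0.2, b = 0.3 with α = 0.1; it prints y_in, the weight and bias changes, and the updated weights after each of the four bipolar OR patterns.

```python
# Bipolar OR training set: columns x1, x2 and target t (the bias input 1 is handled separately).
samples = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
targets = [1, 1, 1, -1]

w = [0.1, 0.2]   # initial weights w1, w2
b = 0.3          # initial bias
alpha = 0.1      # learning rate

for x, t in zip(samples, targets):
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # net input
    y = y_in                                          # identity activation during training
    dw = [alpha * (t - y) * xi for xi in x]           # weight changes Δw1, Δw2
    db = alpha * (t - y)                              # bias change Δb
    w = [wi + dwi for wi, dwi in zip(w, dw)]
    b += db
    print(f"y_in={y_in:.4f}  dw={[round(d, 5) for d in dw]}  db={db:.5f}  "
          f"w={[round(wi, 5) for wi in w]}  b={b:.5f}")
# After the fourth pattern: w1 ≈ 0.26236, w2 ≈ 0.33756, b ≈ 0.36964,
# matching the last row of the Epoch 1 table.
```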
Thank you