
DEPARTMENT OF

COMPUTER SCIENCE AND ENGINEERING

DEEP LEARNING LAB

Name

Roll Number

Year & Semester

Regulation NECR BTECH 21

Branch

Section
INDEX

S.No   Date   Name of the Task   Page No.   Sign.
TASK-1 Date:
AIM: Implementation of different activation functions to train a neural network.
Introduction:
Activation functions in neural networks introduce non-linearity, allowing the network to learn
complex relationships between inputs and outputs.

Why do we need activation functions?


1. Introduce non-linearity: Without activation functions, neural networks would only be
able to learn linear relationships between inputs and outputs (see the sketch after this list).
2. Enable complex representations: Non-linear activation functions allow networks to
learn complex, abstract representations of the data.
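To see point 1 concretely, here is a minimal sketch (with arbitrary example matrices) showing
that two stacked linear layers collapse into a single linear map, so depth adds nothing without
a non-linearity between layers:

import numpy as np

# Two "layers" with weights only and no activation function
W1 = np.array([[2.0, 0.5],
               [1.0, -1.0]])
W2 = np.array([[1.0, 3.0]])
x = np.array([1.0, 2.0])

# Passing x through both layers...
two_layer = W2 @ (W1 @ x)
# ...is identical to one layer with the combined matrix W2 @ W1
one_layer = (W2 @ W1) @ x

print(two_layer, one_layer)  # same output: the network is still linear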
Here are some common activation functions:
1. The sigmoid function: It is a mathematical function commonly used in machine
learning, particularly in logistic regression and neural networks. It has the formula:

sigmoid(x) = 1 / (1 + e^(-x))
Here’s a breakdown of how it works:


 Range: The sigmoid function outputs values between 0 and 1. This makes it particularly
useful for models where you want to interpret the output as a probability.
 Shape: The function has an S-shaped curve (hence the name "sigmoid"). For very large
positive inputs, it asymptotically approaches 1, and for very large negative inputs, it
approaches 0.
 Python code to compute the sigmoid function and print the output:
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Generate a range of values
x_values = np.linspace(-10, 10, 100)
sigmoid_values = sigmoid(x_values)

# Plotting
plt.plot(x_values, sigmoid_values)
plt.title('Sigmoid Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.grid(True)
plt.show()

# Print some example values
print("Input values:", x_values[:5])      # print first 5 inputs for brevity
print("Sigmoid values:", sigmoid_values[:5])

Explanation:
1. Import libraries: NumPy for calculations and Matplotlib for plotting.
2. Define sigmoid(x): This function calculates the sigmoid activation.
3. Generate data: Create an array of input values.
4. Plot: Visualize the sigmoid function.
5. Print values: Display a few example input and output pairs.

Output: Running the code will show a plot of the sigmoid function and print a few sample
values:
Input values: [-10. -9.8 -9.6 -9.4 -9.2]
Sigmoid values: [4.53978687e-05 5.55360353e-05 6.67471570e-05 7.85050321e-05 9.06883501e-05]
 The plot will show an S-shaped curve, which is characteristic of the sigmoid function.
2. Stepwise (binary step) activation function: The binary step function returns either 0 or 1.
1. It returns 0 if the input is less than zero.
2. It returns 1 if the input is greater than or equal to zero.
CODE:
import numpy as np
import matplotlib.pyplot as plt

def stepwise(x):
    return np.where(x < 0, 0, 1)

# Generate a range of values
x_values = np.linspace(-10, 10, 100)
stepwise_values = stepwise(x_values)

# Plotting
plt.plot(x_values, stepwise_values, drawstyle='steps-post')
plt.title('Stepwise Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.grid(True)
plt.ylim(-0.1, 1.1)  # set y-axis limits for better visualization
plt.show()

# Print some example values
print("Input values:", x_values[:5])      # print first 5 inputs for brevity
print("Stepwise values:", stepwise_values[:5])

Explanation:
1. Import libraries: NumPy for calculations and Matplotlib for plotting.
2. Define stepwise(x): This function uses np.where to apply the stepwise activation.
3. Generate data: Create an array of input values.
4. Plot: Visualize the stepwise function using a step plot.
5. Print values: Display a few sample input and output pairs.
Output: Running this code will display a plot of the stepwise function and print some example
values:
Input values: [-10. -9.8 -9.6 -9.4 -9.2]
Stepwise values: [0 0 0 0 0]
1. The plot shows a vertical step at the threshold (x = 0).
2. For input values less than 0, the function output is 0, and for values greater than or equal to
0, the output is 1.
3. The printed values show the stepwise function outputs corresponding to the first few input
values.

3. The Rectified Linear Unit (ReLU) activation function: It is one of the most commonly used
activation functions in neural networks. It returns the input directly if it is positive; otherwise,
it returns zero. That is, ReLU returns 0 if the input x is less than 0, and returns x if the input x
is greater than 0.

CODE:
import numpy as np
import matplotlib.pyplot as plt

# Define the ReLU function
def relu(x):
    return np.maximum(0, x)

# Test the function with some sample inputs
inputs = np.array([-3, -1, 0, 1, 3, 5])
outputs = relu(inputs)

print("Inputs: ", inputs)
print("Outputs: ", outputs)

# Plotting the ReLU function
x = np.linspace(-5, 5, 100)
y = relu(x)

plt.plot(x, y)
plt.title('ReLU Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.grid(True)
plt.show()
Explanation:
 The relu function takes an array of inputs and applies the ReLU operation.
 The code tests the function with a sample array and prints the results.
 It also generates a plot showing how ReLU behaves across a range of inputs.
Sample Output:

Inputs: [-3 -1 0 1 3 5]
Outputs: [0 0 0 1 3 5]

 The plot shows a linear output for positive inputs and zero for negative inputs.

4. Tanh Activation Function: The hyperbolic tangent (tanh) activation function is commonly
used in neural networks. It maps input values to a range between -1 and 1. The mathematical
expression for the tanh function is:

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
CODE:
import numpy as np
import matplotlib.pyplot as plt

# Define the tanh function
def tanh(x):
    return np.tanh(x)

# Test the function with some sample inputs
inputs = np.array([-3, -1, 0, 1, 3, 5])
outputs = tanh(inputs)

print("Inputs: ", inputs)
print("Outputs: ", outputs)

# Plotting the tanh function
x = np.linspace(-5, 5, 100)
y = tanh(x)

plt.plot(x, y)
plt.title('Tanh Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.grid(True)
plt.show()
Explanation:
 The tanh function uses NumPy's built-in np.tanh() to compute the tanh of the input
array.

 The code tests the function with a sample array and prints the results.
 It also generates a plot showing how the tanh function behaves across a range of inputs.
Output:
Inputs: [-3 -1 0 1 3 5]

Outputs: [-0.99505475 -0.76159416 0. 0.76159416 0.99505475 0.9999092 ]

 The plot shows that the tanh function is S-shaped, with output values approaching -1 for
large negative inputs and +1 for large positive inputs.

Conclusion:
Different activation functions for training a neural network were implemented successfully.
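As an optional consolidation (not part of the task steps above), the four activation functions
can be compared side by side in a single figure; a minimal sketch:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 200)
activations = {
    'Sigmoid': 1 / (1 + np.exp(-x)),
    'Step': np.where(x < 0, 0, 1),
    'ReLU': np.maximum(0, x),
    'Tanh': np.tanh(x),
}

# Plot each activation in its own subplot for side-by-side comparison
plt.figure(figsize=(10, 8))
for i, (name, y) in enumerate(activations.items(), start=1):
    plt.subplot(2, 2, i)
    plt.plot(x, y)
    plt.title(name)
    plt.grid(True)
plt.tight_layout()
plt.show()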
TASK-2
AIM: Build a deep neural network model, starting with linear regression using a single variable.

1. Problem Definition:

 We want to predict the output y based on a single input feature x using a deep neural
network.

2. Libraries Needed:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

3. Generate Synthetic Data:


 We will create a dataset where the relationship between x and y is roughly linear with
some noise added.
# Generating synthetic data
np.random.seed(42) # for reproducibility
x = np.linspace(0, 10, 100)
y = 2 * x + 3 + np.random.normal(0, 1, size=x.shape) # y = 2x + 3 + noise

# Plot the data
plt.scatter(x, y)
plt.title("Generated Data")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
4. Linear Regression Using a Single Neuron:
 We will start by building a basic model with a single neuron, which is equivalent to linear
regression.
# Define a simple linear regression model
model = Sequential([
    Dense(1, input_dim=1)  # 1 input feature, 1 output
])
# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
model.fit(x, y, epochs=200, verbose=0)

# Predicting using the trained model
y_pred = model.predict(x)

# Plot the results
plt.scatter(x, y, label='True Data')
plt.plot(x, y_pred, color='red', label='Predicted Line')
plt.title("Linear Regression with a Single Variable")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
5. Expanding to a Deep Neural Network:
 Now, let's expand this into a deeper model with multiple layers.
# Define a deeper neural network model
deep_model = Sequential([
    Dense(10, input_dim=1, activation='relu'),  # first hidden layer with 10 neurons
    Dense(8, activation='relu'),                # second hidden layer with 8 neurons
    Dense(4, activation='relu'),                # third hidden layer with 4 neurons
    Dense(1)                                    # output layer
])

# Compile the model
deep_model.compile(optimizer='adam', loss='mse')

# Train the model
deep_model.fit(x, y, epochs=200, verbose=0)

# Predicting using the trained deep model
y_deep_pred = deep_model.predict(x)

# Plot the results
plt.scatter(x, y, label='True Data')
plt.plot(x, y_deep_pred, color='red', label='Deep Model Prediction')
plt.title("Deep Neural Network with Single Variable")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
CODE:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Generating synthetic data
np.random.seed(42)  # for reproducibility
x = np.linspace(0, 10, 100)
y = 2 * x + 3 + np.random.normal(0, 1, size=x.shape)  # y = 2x + 3 + noise

# Plot the data
plt.scatter(x, y)
plt.title("Generated Data")
plt.xlabel("x")
plt.ylabel("y")
plt.show()

# Define a simple linear regression model
model = Sequential([
    Dense(1, input_dim=1)  # 1 input feature, 1 output
])

# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
model.fit(x, y, epochs=200, verbose=0)

# Predicting using the trained model
y_pred = model.predict(x)

# Plot the results
plt.scatter(x, y, label='True Data')
plt.plot(x, y_pred, color='red', label='Predicted Line')
plt.title("Linear Regression with a Single Variable")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()

# Define a deeper neural network model
deep_model = Sequential([
    Dense(10, input_dim=1, activation='relu'),  # first hidden layer with 10 neurons
    Dense(8, activation='relu'),                # second hidden layer with 8 neurons
    Dense(4, activation='relu'),                # third hidden layer with 4 neurons
    Dense(1)                                    # output layer
])

# Compile the model
deep_model.compile(optimizer='adam', loss='mse')

# Train the model
deep_model.fit(x, y, epochs=200, verbose=0)

# Predicting using the trained deep model
y_deep_pred = deep_model.predict(x)

# Plot the results
plt.scatter(x, y, label='True Data')
plt.plot(x, y_deep_pred, color='red', label='Deep Model Prediction')
plt.title("Deep Neural Network with Single Variable")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()

Explanation:
 First Model: The simple model with one neuron is equivalent to linear regression. The
goal is to learn a linear relationship between x and y.
 Deeper Model: The deeper model introduces multiple layers with non-linear activations
(relu). This allows the model to learn more complex relationships, although in this
simple problem the underlying relationship is linear.
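Because the single-neuron model is plain linear regression, its learned parameters can be read
out and compared with the true slope (2) and intercept (3). A minimal sketch, assuming the
model trained above is still in memory (more epochs or a larger learning rate may be needed
for a tight match):

# Inspect the learned weight and bias of the single-neuron model
weights, bias = model.layers[0].get_weights()
print(f"Learned slope: {weights[0][0]:.3f} (true value: 2)")
print(f"Learned intercept: {bias[0]:.3f} (true value: 3)")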
Output:
1. The first output plot shows a straight line (linear regression) fitting the data.
2. The second output plot from the deep model will show a more flexible fit, but since the
data is linear, both models should perform similarly.
Conclusion: A deep neural network model, starting from linear regression with a single
variable, was implemented successfully.
TASK-3

Aim: Build a feed-forward neural network for prediction of logic gates.
Introduction: We use TensorFlow and Keras to build a feed-forward neural network for
predicting the output of basic logic gates (AND, OR, and XOR). The same small model is
trained separately on each gate's truth table.
Step-by-Step Guide:
1. Install TensorFlow: Ensure you have TensorFlow installed in your environment. You can
install it using pip:
pip install tensorflow
2. Create the Dataset: Define the inputs and outputs for the logic gates.
3. Build the Model: Create a simple feed-forward neural network using Keras.
4. Compile and Train the Model: Train the model on the logic gate data.
5. Evaluate the Model: Test the model to see how well it performs.
CODE:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Step 1: Create the dataset
# Truth table inputs for the two-input gates
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
# Outputs
y_and = np.array([[0], [0], [0], [1]])  # AND gate
y_or = np.array([[0], [1], [1], [1]])   # OR gate
y_xor = np.array([[0], [1], [1], [0]])  # XOR gate

# Function to build the model
def build_model():
    model = Sequential()
    model.add(Dense(8, input_dim=2, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    return model

# Train and evaluate the model for each logic gate
for gate_name, y in zip(['AND', 'OR', 'XOR'], [y_and, y_or, y_xor]):
    print(f'\nTraining model for {gate_name} gate')
    # Step 2: Build the model
    model = build_model()
    # Step 3: Compile the model
    model.compile(optimizer=Adam(learning_rate=0.01),
                  loss='binary_crossentropy', metrics=['accuracy'])
    # Step 4: Train the model
    model.fit(X, y, epochs=1000, verbose=0)
    # Step 5: Evaluate the model
    _, accuracy = model.evaluate(X, y)
    print(f'Accuracy for {gate_name} gate: {accuracy * 100:.2f}%')
    # Make predictions
    predictions = model.predict(X)
    print(f'Predictions for {gate_name} gate:')
    print(np.round(predictions).astype(int))

# Output the input/prediction pairs for each gate
for gate_name, y in zip(['AND', 'OR', 'XOR'], [y_and, y_or, y_xor]):
    model = build_model()
    model.compile(optimizer=Adam(learning_rate=0.01),
                  loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X, y, epochs=1000, verbose=0)
    predictions = model.predict(X)
    print(f'\n{gate_name} gate predictions:')
    for input_val, pred in zip(X, np.round(predictions).astype(int)):
        print(f'Input: {input_val}, Prediction: {pred[0]}')
Explanation:
1. Data Preparation:
o The inputs X represent the possible inputs to the logic gates.
o The outputs y_and, y_or, and y_xor represent the truth tables for the AND, OR,
and XOR gates, respectively.
2. Model Building:
o A simple feed-forward neural network is built using Sequential from Keras. It has
one hidden layer with 8 neurons and ReLU activation, and an output layer with 1
neuron and sigmoid activation.
3. Model Compilation:
o The model is compiled using the Adam optimizer and binary cross-entropy loss
function, appropriate for binary classification.
4. Model Training:
o The model is trained for 1000 epochs for each logic gate.
5. Evaluation and Prediction:
o The trained model is evaluated on the training data to check its accuracy.
o Predictions are made on the input data, and the results are printed.
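A side note on why the hidden layer matters: XOR is not linearly separable, so a model with
no hidden layer cannot learn it. A minimal sketch, reusing X, y_xor, and the imports from the
code above:

# A single sigmoid neuron (no hidden layer) can only draw one linear boundary
flat_model = Sequential([Dense(1, input_dim=2, activation='sigmoid')])
flat_model.compile(optimizer=Adam(learning_rate=0.01),
                   loss='binary_crossentropy', metrics=['accuracy'])
flat_model.fit(X, y_xor, epochs=1000, verbose=0)
_, acc = flat_model.evaluate(X, y_xor, verbose=0)
# At most 3 of the 4 XOR rows can be classified correctly (75%)
print(f'XOR accuracy without a hidden layer: {acc * 100:.2f}%')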

OUTPUT:
Conclusion: The above program builds and trains a simple feed-forward neural network for
predicting the output of basic logic gates. The models are evaluated and their accuracy is
printed; finally, the trained models are used to make predictions on the input data.
TASK-4

Aim: Write a Program for Time-Series Forecasting with the LSTM Model.

Step-by-Step Guide:

1. Install TensorFlow: Ensure you have TensorFlow installed in your environment. You can
install it using pip:
pip install tensorflow
2. Create the Dataset: Generate a sine wave and prepare it for training the LSTM model.
3. Build the Model: Create an LSTM model using Keras.

4. Compile and Train the Model: Train the model on the time series data.

5. Make Predictions: Use the trained model to make forecasts.

CODE:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.preprocessing import MinMaxScaler

# Step 1: Generate the dataset
# Create a sine wave as the time series data
t = np.linspace(0, 100, 1000)
data = np.sin(t)

# Normalize the data
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data.reshape(-1, 1))

# Prepare the dataset for LSTM
def create_dataset(data, time_step=1):
    X, y = [], []
    for i in range(len(data) - time_step - 1):
        a = data[i:(i + time_step), 0]
        X.append(a)
        y.append(data[i + time_step, 0])
    return np.array(X), np.array(y)

time_step = 10
X, y = create_dataset(data, time_step)

# Reshape input to be [samples, time steps, features]
X = X.reshape(X.shape[0], X.shape[1], 1)

# Step 2: Build the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
model.add(LSTM(50, return_sequences=False))
model.add(Dense(1))

# Step 3: Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Step 4: Train the model
model.fit(X, y, epochs=100, batch_size=32, verbose=1)

# Step 5: Make predictions
train_predict = model.predict(X)

# Invert predictions back to the original scale
train_predict = scaler.inverse_transform(train_predict)
y = scaler.inverse_transform(y.reshape(-1, 1))

# Plot the results
plt.figure(figsize=(10, 6))
plt.plot(scaler.inverse_transform(data), label='Original Data')
plt.plot(np.arange(time_step, time_step + len(train_predict)), train_predict,
         label='Predicted Data')
plt.legend()
plt.show()
Explanation:
1. Data Generation and Preparation:
o Generate a sine wave using np.sin.
o Normalize the data using MinMaxScaler to scale it between 0 and 1.
o Prepare the dataset for the LSTM model. The create_dataset function creates
input-output pairs for the LSTM with a specified time_step.
2. Model Building:
o Create a sequential LSTM model with two LSTM layers and one Dense layer. The
first LSTM layer returns sequences, while the second does not.
3. Model Compilation:
o Compile the model using the Adam optimizer and mean squared error loss
function.
4. Model Training:
o Train the model on the prepared dataset for 100 epochs with a batch size of 32.
5. Making Predictions:
o Use the trained model to make predictions on the training data.
o Invert the predictions back to the original scale using MinMaxScaler.
o Plot the original data and the predicted data to visualize the forecasting
performance.
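As a possible extension, the trained model can also forecast beyond the observed series by
feeding each prediction back in as the next input. A minimal sketch, assuming model, data,
scaler, and time_step from the code above:

# Iteratively forecast 50 steps past the end of the training data
last_window = data[-time_step:].reshape(1, time_step, 1)
future = []
for _ in range(50):
    next_val = model.predict(last_window, verbose=0)  # shape (1, 1)
    future.append(next_val[0, 0])
    # Slide the window: drop the oldest value, append the new prediction
    last_window = np.append(last_window[:, 1:, :],
                            next_val.reshape(1, 1, 1), axis=1)

# Invert the forecasts back to the original scale and plot them
future = scaler.inverse_transform(np.array(future).reshape(-1, 1))
plt.plot(future, label='Forecast beyond training data')
plt.legend()
plt.show()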
OUTPUT:
Conclusion: The above example demonstrates a basic LSTM model for time series forecasting. In
a real-world scenario, you would use more complex data and potentially more sophisticated
preprocessing and model tuning techniques.
TASK-5

Aim: Write a Program for character recognition using CNN.

Step-by-Step Guide:

1. Install Dependencies: Ensure TensorFlow and other necessary libraries are installed.
2. Load and Preprocess the Dataset: Load the MNIST dataset and preprocess it.
3. Define the CNN Model: Define a CNN model for character recognition.
4. Train the Model: Train the CNN model on the MNIST dataset.
5. Evaluate and Visualize the Results: Test the model on new data and visualize the results.
CODE:
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
import matplotlib.pyplot as plt
import numpy as np

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preprocess the data
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1)).astype('float32') / 255
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1)).astype('float32') / 255

# One-hot encode the labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Define the CNN model
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

# Train the model
history = model.fit(x_train, y_train, epochs=10, batch_size=128,
                    validation_data=(x_test, y_test))

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('\nTest accuracy:', test_acc)

# Function to display an image and its predicted label
def display_prediction(index):
    plt.imshow(x_test[index].reshape(28, 28), cmap='gray')
    plt.axis('off')
    prediction = model.predict(np.expand_dims(x_test[index], axis=0))
    predicted_label = np.argmax(prediction)
    plt.title(f'Predicted: {predicted_label}, True: {np.argmax(y_test[index])}')
    plt.show()

# Display some sample predictions
for i in range(5):
    display_prediction(i)

OUTPUT:
Conclusion: The above program covers loading the MNIST dataset, preprocessing the data,
defining and training a CNN model, and evaluating and visualizing the results.
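As an extension, the history object returned by model.fit in the program above is captured but
never used; a minimal sketch for plotting the learning curves it records:

# Plot accuracy and loss curves recorded during training
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.title('Accuracy per Epoch')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.title('Loss per Epoch')
plt.legend()
plt.show()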
TASK-6
Aim: Write a Program to implement deep learning techniques for image segmentation.
Step-by-Step Guide:
 The process of splitting an image into multiple layers, represented by a smart, pixel-wise
mask, is known as image segmentation.
 It involves grouping an image's pixels into regions. Splitting a picture into a collection of
image objects with comparable properties is the first stage in image processing.
 Scikit-Image is one of the most popular tools/modules for image processing in Python.

1. Installation: To install this module type the below command in the terminal.

pip install scikit-image

2. Converting Image Format: RGB to Grayscale

 The rgb2gray module of the skimage package is used to convert a 3-channel RGB image to a
one-channel monochrome image.
 In order to apply filters and other processing techniques, the expected input is a two-
dimensional array, i.e. a monochrome image.

The skimage.color.rgb2gray() function is used to convert an RGB image to grayscale format.

Syntax : skimage.color.rgb2gray(image)
Parameters : image : an image in RGB format
Return : the image in grayscale format

CODE:
# Importing necessary libraries
from skimage import data
from skimage.color import rgb2gray
import matplotlib.pyplot as plt

# Setting the plot size to 15, 15
plt.figure(figsize=(15, 15))

# Sample image from the scikit-image package
coffee = data.coffee()
plt.subplot(1, 2, 1)

# Displaying the sample image
plt.imshow(coffee)

# Converting the RGB image to monochrome
gray_coffee = rgb2gray(coffee)
plt.subplot(1, 2, 2)

# Displaying the sample image in monochrome format
plt.imshow(gray_coffee, cmap="gray")
plt.show()

OUTPUT:

Conclusion: We use grayscale images for the proper implementation of thresholding
functions. Averaging the red, green, and blue pixel values for each pixel is a simple approach
to convert a color picture's 3D array to a grayscale 2D array. This produces an acceptable gray
approximation by combining the lightness or brightness contributions of each color band.
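Building on the grayscale conversion, segmentation itself can be sketched with Otsu's
automatic threshold from skimage.filters, which turns the grayscale image into a binary,
pixel-wise mask (a minimal, illustrative example of thresholding, not a full deep-learning
segmentation pipeline):

from skimage import data
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
import matplotlib.pyplot as plt

# Convert the sample image to grayscale, then threshold it
gray_coffee = rgb2gray(data.coffee())
thresh = threshold_otsu(gray_coffee)   # automatically chosen threshold
binary_mask = gray_coffee > thresh     # pixel-wise True/False mask

plt.figure(figsize=(15, 7))
plt.subplot(1, 2, 1)
plt.imshow(gray_coffee, cmap='gray')
plt.title('Grayscale Image')
plt.subplot(1, 2, 2)
plt.imshow(binary_mask, cmap='gray')
plt.title(f'Otsu Threshold Mask (t = {thresh:.3f})')
plt.show()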
