
RAMCO INSTITUTE OF TECHNOLOGY

RAJAPALAYAM-626117.

III YEAR / V SEMESTER


AD3511 - DEEP LEARNING LABORATORY

Prepared by,
B. REVATHI
Assistant Professor
AI & DS
Index
Ex.No List of Experiments
1 Solving XOR Problem using DNN

2 Character Recognition using CNN

3 Face Recognition using CNN

4 Language Modeling using RNN

5 Sentiment Analysis using LSTM

6 Parts of Speech Tagging using Sequence to Sequence Architecture

7 Machine Translation Using Encoder and Decoder Model

8 Image Augmentation using GANs

9 Mini Project on Real World Applications

Ex:No:1 Solving XOR Problem using DNN


AIM:
To write a Python program to solve the XOR problem using a DNN.
Algorithm:
Step 1: Start
Step 2: Data Preparation: Generate the XOR truth table.
Step 3: Create a deep neural network (DNN) with an input layer, one or more hidden layers,
and an output layer.
Step 4: Define the number of neurons in each layer and choose an activation function for each
layer.
Step 5: Train the DNN using the XOR truth table as training data and labels.
Step 6: Test the trained model with new XOR input combinations to evaluate its performance.
Step 7: Stop
Program:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# Input and output data for XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
# Create a Sequential model
model = Sequential()
# Add layers to the model
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the model
model.fit(X, y, epochs=10000, verbose=0)
# Evaluate the model
loss, accuracy = model.evaluate(X, y)
print(f"Loss: {loss:.4f}, Accuracy: {accuracy*100:.2f}%")

# Make predictions
predictions = model.predict(X)
print("Predictions:")
for i in range(len(predictions)):
    print(f"Input: {X[i]}, Prediction: {predictions[i][0]:.4f}")
Output:

Result:
Thus the program for solving the XOR problem using a DNN was written and executed
successfully.

Ex:No:2 Character recognition using CNN


AIM:
To write a Python program for character recognition using a CNN.

Algorithm:
Step 1: Data Collection
Gather a labeled dataset of handwritten characters, such as MNIST or a custom
dataset.
Step 2: Model Architecture
Design a CNN architecture suitable for character recognition.
Step 3: Define Model Parameters
Specify hyperparameters like learning rate, batch size, number of epochs, and regularization
techniques (e.g., dropout).
Step 4: Build the CNN Model
Implement the CNN architecture using a deep learning framework like TensorFlow or
PyTorch.
Step 5: Compile the Model
Compile the CNN model using the specified loss function, optimizer, and metrics (e.g.,
accuracy).
Step 6: Train the Model
Train the CNN model using the preprocessed training data.
Step 7: Evaluate the Model
Evaluate the trained model using the preprocessed testing data.
Step 8: Stop
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
import matplotlib.pyplot as plt
# Visualize some of the train images
num_samples_to_visualize = 5
plt.figure(figsize=(15, 3))
for i in range(num_samples_to_visualize):
    plt.subplot(1, num_samples_to_visualize, i+1)
    plt.imshow(x_train[i].reshape(28, 28), cmap='gray')
    plt.title(f"Label: {np.argmax(y_train[i])}")
    plt.axis('off')
plt.show()
Output:

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=15, batch_size=64,
          validation_split=0.1)
Output:

# Evaluate the model on the test set


test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_accuracy}")

Output:
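The evaluation step of the algorithm can go beyond plain accuracy; a short sketch of per-class precision and recall, assuming scikit-learn is available and reusing model, x_test and y_test from above:

# Optional: per-class precision/recall/F1 on the test set (a sketch; assumes scikit-learn)
from sklearn.metrics import classification_report
y_pred = np.argmax(model.predict(x_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print(classification_report(y_true, y_pred))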

# Choose an index from the test set


index = 0
# Get the image and its label
test_image = x_test[index]
true_label = np.argmax(y_test[index])
# Make a prediction
prediction = model.predict(np.expand_dims(test_image, axis=0))
predicted_label = np.argmax(prediction)
# Calculate accuracy for this individual image
accuracy = 100 * (predicted_label == true_label)
# Display the test image
plt.imshow(test_image.reshape(28, 28), cmap='gray')
plt.title(f"True Label: {true_label}, Predicted Label: {predicted_label}, Accuracy:
{accuracy:.2f}%")
plt.axis('off')
plt.show()

Output:

Result:
Thus the program for character recognition using a CNN was written and executed
successfully.
EX.NO 3 Face Recognition using CNN

AIM:
To write a Python program for face recognition using a CNN.
Algorithm:
Step 1: Collect a dataset of facial images with labels.
Step 2: Data Pre-processing
Resize all facial images to a consistent size.
Normalize pixel values, Augment the dataset.
Step 3: Divide the dataset into training, validation, and test sets.
Step 4: Build the CNN Model. Design the CNN architecture for face recognition. Add
convolutional layers, pooling layers, and fully connected layers. Use activation functions like
ReLU.
Step 5: Model Compilation
Step 6: Model Training
Train the CNN model on the training dataset.
Use the validation dataset to monitor performance and prevent overfitting.
Step 7: Model Evaluation
Evaluate the trained model on the test dataset. Calculate metrics like accuracy, precision,
recall, and F1-score.

Program:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import RMSprop
import tensorflow as tf
import matplotlib.pyplot as plt
import cv2
import os
import numpy as np
img = image.load_img("/content/drive/MyDrive/deep learning/archive
(1)/basedata/training/face/person_0000.jpg")
plt.imshow(img)
Output:

cv2.imread("/content/drive/MyDrive/deep learning/archive
(1)/basedata/training/face/person_0000.jpg").shape
Output:

train = ImageDataGenerator(rescale=1/255)
validation = ImageDataGenerator(rescale=1/255)
trained_dataset = train.flow_from_directory(
    "/content/drive/MyDrive/deep learning/archive (1)/basedata/training",
    target_size=(200, 200), batch_size=3, class_mode='binary')
# Use the validation generator (not train) for the validation set
validation_dataset = validation.flow_from_directory(
    "/content/drive/MyDrive/deep learning/archive (1)/basedata/validation",
    target_size=(200, 200), batch_size=3, class_mode='binary')
Output:

trained_dataset.class_indices
Output:

trained_dataset.classes
Output:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['accuracy'])
model_fit = model.fit(trained_dataset,
                      steps_per_epoch=3,
                      epochs=35,
                      validation_data=validation_dataset)
Output:

dir_path = "/content/drive/MyDrive/deep learning/testing"


for i in os.listdir(dir_path):
    img = image.load_img(dir_path + '//' + i, target_size=(200, 200))
    plt.imshow(img)
    plt.show()
    X = image.img_to_array(img)
    X = np.expand_dims(X, axis=0)
    images = np.vstack([X])
    val = model.predict(images)
    # Sigmoid output: treat probability below 0.5 as class 0 (see trained_dataset.class_indices)
    if val[0][0] < 0.5:
        print("Face detected")
    else:
        print("Face not detected")
Output:

validation_accuracy = model.evaluate(validation_dataset)
print(f"validation accuracy: {validation_accuracy[1] * 100:.2f}%")
Output:
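The algorithm's evaluation step also asks for precision, recall and F1-score. A minimal sketch, assuming scikit-learn is available; the generator is rebuilt with shuffle=False so predictions line up with the stored class labels:

# Optional sketch: precision/recall/F1 on the validation set (assumes scikit-learn)
from sklearn.metrics import classification_report
eval_dataset = validation.flow_from_directory(
    "/content/drive/MyDrive/deep learning/archive (1)/basedata/validation",
    target_size=(200, 200), batch_size=3, class_mode='binary', shuffle=False)
probs = model.predict(eval_dataset)
preds = (probs > 0.5).astype(int).ravel()
print(classification_report(eval_dataset.classes, preds))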

import tensorflow.keras.models
model.save('detection_model.h5')
loaded_model = tf.keras.models.load_model('detection_model.h5')
loaded_model
Result:
Thus the program for face recognition using a CNN was written and executed successfully.
EX:NO.4 Language Modeling using RNN

Aim:
To write a Python program for language modeling using an RNN.
Algorithm:
Step 1: Data Collection
Gather a large corpus of text data for training the language model. This text data can be from
books, articles, websites, or any source relevant to your specific task.
Step 2: Data Preprocessing
Tokenize the text into words or subword units (e.g., using tools like NLTK or spaCy).
Step 3: Data Sequencing
Divide the text data into sequences or chunks of a fixed length (e.g., sentences or
paragraphs).
Step 4: Model Architecture
Choose an RNN architecture, such as LSTM (Long Short-Term Memory) or GRU (Gated
Recurrent Unit), or a combination of multiple layers.
Step 5: Embedding Layer (Optional)
If needed, include an embedding layer to convert integer IDs to dense vector representations.
Step 6: Model Compilation
Choose a loss function, typically categorical cross-entropy for language modeling tasks.
Step 7: Model Training
Train the RNN model using the preprocessed data.
Step 8: Model Evaluation
Evaluate the trained model's performance on a separate test dataset.

Program:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

df = pd.read_csv('/content/drive/MyDrive/deep learning/Times_of_India_Healines_since_jan_2020_score.csv')
df
Output:

headlines = df['Headline'].tolist()
# Assuming headlines is a list of strings with some non-text values
headlines = [str(headline) for headline in headlines]
headlines = headlines[:20]
tokenizer = Tokenizer()
tokenizer.fit_on_texts(headlines)
total_words = len(tokenizer.word_index) + 1
len(headlines)

Output:

headlines
Output:

input_sequences = []
for line in headlines:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)
max_sequence_length = max([len(seq) for seq in input_sequences])
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_length,
                                padding='pre')
label = tf.keras.utils.to_categorical(label, num_classes=total_words)
model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_length-1))  # embedding dimension 100
model.add(LSTM(150))  # 150 LSTM units
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(predictors, label, epochs=100, verbose=1)
def generate_text(seed_text, next_words, model, max_sequence_length, tokenizer,
                  temperature=1.0):
    generated_text = seed_text
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        token_list = pad_sequences([token_list], maxlen=max_sequence_length-1,
                                   padding='pre')
        predicted_probabilities = model.predict(token_list, verbose=0)[0]
        # Apply temperature to the predicted probabilities
        predicted_probabilities = np.log(predicted_probabilities) / temperature
        predicted_probabilities = np.exp(predicted_probabilities)
        predicted_probabilities /= np.sum(predicted_probabilities)
        # Sample a word based on the adjusted probabilities
        predicted = np.random.choice(len(predicted_probabilities), size=1,
                                     p=predicted_probabilities)[0]
        output_word = ""
        for word, index in tokenizer.word_index.items():
            if index == predicted:
                output_word = word
                break
        seed_text += " " + output_word
        generated_text += " " + output_word
    return generated_text
seed_text = "Music helped me find my purpose in life"
next_words = 20 # Number of words to generate
generated_text = generate_text(seed_text, next_words, model, max_sequence_length,
tokenizer)
print(generated_text)
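Lower temperatures make the sampling greedier, higher temperatures make it more adventurous. A small sketch comparing temperatures, reusing the generate_text function defined above:

# Optional: compare sampling temperatures (a sketch reusing generate_text above)
for temp in (0.5, 1.0, 1.5):
    print(f"temperature={temp}:",
          generate_text("Music helped me", 10, model, max_sequence_length,
                        tokenizer, temperature=temp))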
Output:

Result:
Thus the program for language modeling using an RNN was written and executed successfully.
EX:NO:5 Sentiment analysis using LSTM

AIM:
To write a Python program for sentiment analysis using an LSTM.
Algorithm:
Step 1: Data Collection
Gather a dataset with labeled text examples and their corresponding sentiment labels (e.g.,
positive, negative, or neutral).
Step 2: Data Preprocessing
Tokenize the text: Split the text into individual words or subword tokens.
Convert text to numerical form: Assign a unique numerical ID to each word/token (word
embedding).
Pad sequences: Ensure all sequences have the same length by padding or truncating them as
needed.
Step 3: Split the Dataset
Divide your dataset into three subsets: training, validation, and test sets.
Step 4: LSTM Model Architecture
Define an LSTM model architecture with layers such as embedding, LSTM, and dense layers.
Configure input sequence length, embedding dimension, LSTM units, and the number of
output units corresponding to sentiment classes.
Step 5: Compile the Model
Choose a loss function, an optimizer, and evaluation metrics.
Step 6: Training
Train the LSTM model on the training dataset.
Monitor training progress with validation data.
Use techniques like early stopping to prevent overfitting.
Step 7: Model Evaluation
Evaluate the model's performance on the test dataset using metrics like accuracy and F1
score.
Program:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import re
import pickle
import numpy as np
import pandas as pd
# Plot libraries
import seaborn as sns
from wordcloud import WordCloud
import matplotlib.pyplot as plt
df = pd.read_csv("/content/drive/MyDrive/deep learning/movie.csv")
df
Output:

# Separate text and labels
texts = df['text'].tolist()
labels = df['label'].tolist()
# Tokenize the text (texts must be defined before fitting the tokenizer)
tokenizer = Tokenizer(num_words=5000, oov_token='<OOV>')
tokenizer.fit_on_texts(texts)
from tensorflow.keras.utils import to_categorical
# Convert text to sequences
sequences = tokenizer.texts_to_sequences(texts)
# Pad sequences to have the same length
max_length = 100  # Choose an appropriate value
padded_sequences = pad_sequences(sequences, maxlen=max_length, padding='post',
                                 truncating='post')
labels = to_categorical(labels, num_classes=2)
from sklearn.model_selection import train_test_split
train_texts, test_texts, train_labels, test_labels = train_test_split(
    padded_sequences, labels, test_size=0.2, random_state=42)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
model = Sequential()
model.add(Embedding(input_dim=5000, output_dim=16, input_length=max_length))
model.add(LSTM(128))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_texts, train_labels, epochs=5, batch_size=64,
          validation_data=(test_texts, test_labels))
Output:
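The algorithm above also calls for early stopping to prevent overfitting. A minimal sketch, assuming the same model and data split, that could replace the plain fit call:

# Optional: early stopping on validation loss (a sketch; replaces the plain fit call)
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True)
model.fit(train_texts, train_labels, epochs=20, batch_size=64,
          validation_data=(test_texts, test_labels), callbacks=[early_stop])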
model.save('your_lstm_model.h5')
loss, accuracy = model.evaluate(test_texts, test_labels)
print(f'Loss: {loss:.4f}, Accuracy: {accuracy:.4f}')
Output:

# Example sentiment prediction


new_texts = ["This movie is great!"]
new_sequences = tokenizer.texts_to_sequences(new_texts)
new_padded = pad_sequences(new_sequences, maxlen=max_length, padding='post',
                           truncating='post')
predictions = model.predict(new_padded)
print(predictions)
sentiment = model.predict(new_padded, batch_size=1, verbose=2)[0]
if np.argmax(sentiment) == 0:
    print("negative")
elif np.argmax(sentiment) == 1:
    print("positive")
Output:

Result:
Thus the program for sentiment analysis using an LSTM was written and executed
successfully.
EX.No.6 Parts of Speech Tagging using Sequence to Sequence Architecture

Aim:
To write a Python program for parts-of-speech tagging using a sequence-to-sequence
architecture.
Algorithm:
Step 1: Data Preparation
Data preprocessing: Tokenize sentences into words and tags, and create a vocabulary of
words and part-of-speech tags. You may need to pad or truncate sentences to a fixed length.
Step 2: Model Architecture
Sequence-to-Sequence Model: Create a sequence-to-sequence model with an encoder-
decoder architecture. You can use a recurrent neural network (RNN), a long short-term
memory (LSTM) network, or a transformer-based model. Encoder: The encoder processes the
input sentence (sequence of words). It can be a bidirectional RNN or a transformer encoder.
The encoder's output will be a representation of the input sentence.
Decoder: The decoder takes the encoder's output and generates a sequence of part-of-speech
tags. It can be a unidirectional RNN or a transformer decoder.
Output Layer: Use a softmax layer at the output of the decoder to predict the part-of-speech
tag for each word in the input sentence. The number of output units in the softmax layer
should match the number of unique part-of-speech tags in your dataset.
Step 3: Training
Loss Function: Use a categorical cross-entropy loss to compute the error between predicted
part-of-speech tags and the ground truth tags.
Optimizer: Utilize an optimizer like Adam or SGD to minimize the loss during training.
Training Loop: Train the model on the training data. Monitor the loss on the validation set to
prevent overfitting.
Hyperparameter Tuning: Experiment with hyperparameters such as learning rate, batch size,
and model architecture to optimize performance.
Step 4: Evaluation
Testing: Evaluate the trained model on the test dataset to assess its accuracy and performance.
Performance Metrics: Use metrics like accuracy, precision, recall, and F1-score to measure
the quality of part-of-speech tagging.
Program:
import torch
import torch.nn as nn
import torch.optim as optim
input_sequence = ["She", "reads", "a", "book"]
output_sequence = ["PRP", "VBZ", "T", "N"]
input_vocab = set(input_sequence)
output_vocab = set(output_sequence)
input_word2index = {word: i for i, word in enumerate(input_vocab)}
input_index2word = {i: word for i, word in enumerate(input_vocab)}
output_word2index = {tag: i for i, tag in enumerate(output_vocab)}
output_index2word = {i: tag for i, tag in enumerate(output_vocab)}
input_indices = [input_word2index[word] for word in input_sequence]
output_indices = [output_word2index[tag] for tag in output_sequence]
class Seq2Seq(nn.Module):
    def __init__(self, input_size, output_size, hidden_size):
        super(Seq2Seq, self).__init__()
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, input_seq):
        embedded = self.embedding(input_seq)
        output, hidden = self.gru(embedded)
        output = self.out(output)
        return output
input_size = len(input_vocab)
output_size = len(output_vocab)
hidden_size = 256
learning_rate = 0.01
epochs = 100
model = Seq2Seq(input_size, output_size, hidden_size)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(epochs):
    optimizer.zero_grad()
    input_tensor = torch.tensor(input_indices, dtype=torch.long)
    output_tensor = torch.tensor(output_indices, dtype=torch.long)
    output = model(input_tensor)
    loss = criterion(output.view(-1, output_size), output_tensor.view(-1))
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch + 1}/{epochs}], Loss: {loss.item():.4f}')


Output:

# Create word-to-index and index-to-word mappings for input
input_word2index = {word: i for i, word in enumerate(input_vocab)}
input_word2index['<UNK>'] = len(input_vocab)  # Add an unknown-word token
input_index2word = {i: word for i, word in enumerate(input_vocab)}
input_index2word[len(input_vocab)] = '<UNK>'
# Inference
with torch.no_grad():
    input_test = ["She", "reads", "a", "book"]
    input_test_indices = [input_word2index.get(word, input_word2index['<UNK>'])
                          for word in input_test]
    input_test_tensor = torch.tensor(input_test_indices,
                                     dtype=torch.long).unsqueeze(1)  # Add a batch dimension
    predicted_indices = model(input_test_tensor)
    # Extract the most likely tag for each word in the sequence
    predicted_indices = predicted_indices.argmax(dim=2)
    # Convert to a list of tag indices
    predicted_indices = predicted_indices.view(-1).tolist()
    # Convert the predicted indices to tags
    predicted_tags = [output_index2word[i] for i in predicted_indices]
print("Input sentence:", input_test)
print("Predicted tags:", predicted_tags)
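Since this toy model is trained and tested on the same sentence, a quick sanity check can confirm it has memorized the mapping; a minimal sketch reusing predicted_tags and output_sequence from above:

# Optional sanity check: compare predictions with the training tags
correct = sum(p == t for p, t in zip(predicted_tags, output_sequence))
print(f"Matched {correct}/{len(output_sequence)} tags on the training sentence")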
Output:

Result:
Thus the program for parts-of-speech tagging using a sequence-to-sequence
architecture was written and executed successfully.

EX.NO. 7 Machine Translation Using Encoder – Decoder Model

Aim:
To write a Python program for machine translation using an encoder–decoder model.
Algorithm:
Step 1: Data Preparation
Data preprocessing:
Tokenize sentences in the source and target languages (here, characters are used as tokens).
Create vocabularies for the source and target languages.
Perform padding or truncation of sentences to a fixed length if necessary.
Step 2: Model Architecture
Sequence-to-Sequence Model:
Create a sequence-to-sequence model with an encoder-decoder architecture.
Choose the architecture for the encoder and decoder:
Encoder: processes the source sentence; it can be an RNN/LSTM (optionally bidirectional) or
a transformer encoder. Its final states summarize the input sentence.
Decoder: generates the target-language sequence one token at a time, conditioned on the
encoder states; it can be a unidirectional RNN/LSTM or a transformer decoder.
Output Layer: Use a softmax layer at the output of the decoder to predict the next target
token at each timestep. The number of output units should match the size of the target
vocabulary.
Step 3: Training
Loss Function:
Use categorical cross-entropy loss to compute the error between predicted target tokens and
the ground-truth tokens.
Optimizer:
Utilize an optimizer, such as Adam or SGD, to minimize the loss during training.
Step 4: Evaluation
Testing:
Evaluate the trained model on the test dataset to assess its accuracy and performance.
Performance Metrics:
Use metrics such as token-level accuracy (or a translation-quality score like BLEU) to
measure the quality of the translations.
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('/content/drive/MyDrive/deep learning/deu.txt',sep='\t')
df.head(5)
Output:

data_path = '/content/drive/MyDrive/deep learning/deu.txt'


df.columns = ["English", "French","text"]

# Print the first 10 rows


df.head(10)
Output:
df.drop('text',axis=1,inplace=True)
df.head(10)

Output:

from tensorflow.keras.models import Model


from tensorflow.keras.layers import Input,LSTM,Dense

batch_size = 64
epochs = 100
latent_dim = 256  # size of the LSTM hidden and cell states
num_samples = 10000
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text, _ = line.split('\t')
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)
input_characters=sorted(list(input_characters))
target_characters=sorted(list(target_characters))

num_encoder_tokens=len(input_characters)
num_decoder_tokens=len(target_characters)

max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
Output:

input_token_index = dict([(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict([(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    encoder_input_data[i, t + 1:, input_token_index[' ']] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.
    decoder_input_data[i, t + 1:, target_token_index[' ']] = 1.
    decoder_target_data[i, t:, target_token_index[' ']] = 1.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
Output:
model.save('eng-german.h5')
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
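The loop below calls decode_sequence, which this manual never defines. A minimal greedy-decoding sketch, consistent with the encoder_model and decoder_model constructed above (following the standard Keras character-level seq2seq recipe, an assumption rather than the author's original code):

def decode_sequence(input_seq):
    # Encode the input sentence into the initial decoder states
    states_value = encoder_model.predict(input_seq)
    # Start the target sequence with the "start" character '\t'
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    target_seq[0, 0, target_token_index['\t']] = 1.
    decoded_sentence = ''
    while True:
        output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
        # Greedily take the most probable next character
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char
        # Stop at the "end" character '\n' or when the sentence gets too long
        if sampled_char == '\n' or len(decoded_sentence) > max_decoder_seq_length:
            break
        # Feed the sampled character back in as the next decoder input
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.
        states_value = [h, c]
    return decoded_sentence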
for i in range(5):
    input_seq = encoder_input_data[i:i+1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print("Input: ", input_texts[i])
    print("Decoded sentence: ", decoded_sentence)
Output:

Result:
Thus the program for machine translation using an encoder–decoder model was written and
executed successfully.

EX.No. 8 Image Augmentation using GANs

Aim:
To write a Python program for image augmentation using GANs.
Algorithm:
Step 1: Data Preparation
Collect and preprocess your original dataset of images for training the GAN.
Define the target augmentation transformations you want to achieve (e.g., rotation, scaling,
brightness changes, etc.).
Step 2: GAN Training
Generator Network:
Initialize a generator network (G) that will learn to generate augmented images. The
architecture of G should be appropriate for generating images similar to your original dataset.
Discriminator Network:
Initialize a discriminator network (D) to distinguish between real and generated images.
Step 3: Image Augmentation
Use the trained generator (G) to generate augmented images. You can apply specific
augmentation parameters (e.g., rotation angles, scaling factors) as input conditions to G to
control the type of augmentation.
Apply the generated augmented images to your original dataset as needed.
Step 4: Evaluation and Testing
Evaluate the quality of the augmented images and assess their impact on the performance of
your machine learning model (if used for model training).
Step 5: Fine-Tuning (Optional)
Optionally, fine-tune the GAN or retrain it with more specific augmentation requirements or
on additional data.
Step 6: Image Augmentation Using GANs Algorithm (Simplified)
Prepare your dataset and define desired augmentation transformations.
Train a GAN with a generator (G) and discriminator (D).

Program:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, Dropout
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
# Load the MNIST dataset
(x_train, _), (_, _) = mnist.load_data()
# Normalize and reshape the images
x_train = (x_train.astype('float32') - 127.5) / 127.5
x_train = np.expand_dims(x_train, axis=-1)
# Define the generator network
generator = Sequential()
generator.add(Dense(7 * 7 * 256, input_dim=100))
generator.add(Reshape((7, 7, 256)))
generator.add(BatchNormalization())
generator.add(Conv2DTranspose(128, kernel_size=5, strides=1, padding='same',
                              activation='relu'))
generator.add(BatchNormalization())
generator.add(Conv2DTranspose(64, kernel_size=5, strides=2, padding='same',
                              activation='relu'))
generator.add(BatchNormalization())
generator.add(Conv2DTranspose(1, kernel_size=5, strides=2, padding='same',
                              activation='tanh'))

# Define the discriminator network


discriminator = Sequential()
discriminator.add(Conv2D(64, kernel_size=5, strides=2, padding='same',
                         input_shape=(28, 28, 1), activation='relu'))
discriminator.add(Dropout(0.3))
discriminator.add(Conv2D(128, kernel_size=5, strides=2, padding='same', activation='relu'))
discriminator.add(Dropout(0.3))
discriminator.add(Flatten())
discriminator.add(Dense(1, activation='sigmoid'))

# Compile the discriminator


discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(learning_rate=0.0002, beta_1=0.5),
                      metrics=['accuracy'])

# Combine the generator and discriminator into a single GAN model


# Freeze the discriminator's weights while the generator is trained through the GAN
discriminator.trainable = False
gan_input = Input(shape=(100,))
gan_output = discriminator(generator(gan_input))
gan = Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy',
            optimizer=Adam(learning_rate=0.0002, beta_1=0.5))

# Training hyperparameters
epochs = 10
batch_size = 128
sample_interval = 10

# Training loop
for epoch in range(epochs):
    # Randomly select a batch of real images
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]
    # Generate a batch of fake images
    noise = np.random.normal(0, 1, (batch_size, 100))
    fake_images = generator.predict(noise)
    # Train the discriminator
    x = np.concatenate((real_images, fake_images))
    y = np.concatenate((np.ones((batch_size, 1)), np.zeros((batch_size, 1))))
    d_loss = discriminator.train_on_batch(x, y)
    # Train the generator
    noise = np.random.normal(0, 1, (batch_size, 100))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    # Print the progress and save samples
    if epoch % sample_interval == 0:
        print(f'Epoch: {epoch} Discriminator Loss: {d_loss[0]} Generator Loss: {g_loss}')
        samples = generator.predict(np.random.normal(0, 1, (16, 100)))
        samples = (samples * 127.5) + 127.5
        samples = samples.reshape(16, 28, 28)
        fig, axs = plt.subplots(4, 4)
        count = 0
        for i in range(4):
            for j in range(4):
                axs[i, j].imshow(samples[count, :, :], cmap='gray')
                axs[i, j].axis('off')
                count += 1
        plt.show()
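With training finished, the generator itself becomes the augmentation tool: synthetic digits can be appended to the real training set. A minimal sketch, reusing generator and x_train from above:

# Optional: use the trained generator to augment the real training set (a sketch)
noise = np.random.normal(0, 1, (1000, 100))
synthetic_images = generator.predict(noise)  # values in [-1, 1], shape (1000, 28, 28, 1)
x_augmented = np.concatenate([x_train, synthetic_images], axis=0)
print("Augmented training set shape:", x_augmented.shape)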
Output:
Result:
Thus the program for image augmentation using GANs was written and executed
successfully.
