Report Batch 10
AI POWERED RECOMMENDER SYSTEM WITH DEEP LEARNING FOR RETAIL TRANSACTION
A PROJECT REPORT
Submitted by
BABY SHALINI C (920820104009)
FAHMITHA SIRIN N (920820104019)
MADHUMITHA J (920820104031)
SUSMITHA N (920821104056)
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
ANNA UNIVERSITY: CHENNAI 600 025
MAY 2024
BONAFIDE CERTIFICATE
Certified that this project report “AI POWERED RECOMMENDER SYSTEM WITH
DEEP LEARNING FOR RETAIL TRANSACTION” is the bonafide work of “BABY
SHALINI C (920820104009), FAHMITHA SIRIN N (920820104019),
MADHUMITHA J (920820104031), SUSMITHA N (920821104056) ” who carried out
the project work under my supervision.
SIGNATURE
Dr. S. M. VIJAYARAJAN M.E., Ph.D.,
HEAD OF THE DEPARTMENT
Professor,
Computer Science and Engineering,
NPR College of Engineering and Technology,
Natham, Dindigul – 624001.

SIGNATURE
Dr. M. INDRA DEVI M.E., (Ph.D.),
SUPERVISOR
Professor,
Computer Science and Engineering,
NPR College of Engineering and Technology,
Natham, Dindigul – 624001.
DECLARATION
We declare that the project report entitled "AI POWERED RECOMMENDER SYSTEM WITH
DEEP LEARNING FOR RETAIL TRANSACTION", submitted by us in partial fulfillment of the
requirement for the award of the degree of Bachelor of Engineering / Technology from
NPR College of Engineering and Technology (An Autonomous Institution, Affiliated to
Anna University, Chennai), Natham, Dindigul, is our own work. The report has not been
submitted for the award of any other degree / diploma of this university or any other
university before.
Place: Name:
Date:
Reg.No:
ACKNOWLEDGEMENT
First and foremost, we praise and thank nature from the depth of our hearts, which
has given us an immense source of strength, comfort, and inspiration in the completion
of this project work.
We extend our gratitude to our Head of the Department of Computer Science and
Engineering, Dr. S. M. VIJAYARAJAN M.E., Ph.D., Professor, for providing
constructive suggestions and his sustained encouragement all through this project.
We express our grateful thanks to our Project Guide, Dr. M. INDRA DEVI
M.E., Ph.D., Assistant Professor, for her valuable technical guidance, patience, and
motivation, which helped us complete this project successfully.
Also, we would like to record our deepest gratitude to our parents for their
constant encouragement and support which motivated us to complete our project.
NPR COLLEGE OF ENGINEERING & TECHNOLOGY DEPARTMENT
OF COMPUTER SCIENCE AND ENGINEERING
Vision
• To develop students with intellectual curiosity and technical expertise to
meet the global needs.
Mission
• To achieve academic excellence by offering quality technical education
using best teaching techniques.
• To improve industry-institute interactions and expose students to the industrial atmosphere.
• To develop interpersonal skills along with value-based education in a
dynamic learning environment.
• To explore solutions for real time problems in the society.
Vision
• To produce globally competent technical professionals for digitized society.
Mission
• To establish a conducive academic environment by imparting quality
education and value-added training.
• To encourage students to develop innovative projects to optimally resolve
challenging social problems.
NPR COLLEGE OF ENGINEERING & TECHNOLOGY DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING PROGRAM OUTCOMES
(PO)
PO1: Engineering knowledge: Apply the knowledge of mathematics, science,
engineering fundamentals, and an engineering specialization to the solution of complex
engineering problems.
PO2: Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
PO5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
PO6: The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
PO9: Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
PO11: Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
PO12: Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
NPR COLLEGE OF ENGINEERING & TECHNOLOGY DEPARTMENT
OF COMPUTER SCIENCE AND ENGINEERING COURSE OUTCOMES,
PROGRAM EDUCATIONAL OBJECTIVES &
COURSE OUTCOMES
C411.1: Identify technically and economically feasible problems of social relevance.
C411.2: Plan and build the project team with assigned responsibilities.
C411.3: Identify and survey the relevant literature for getting exposed to related
solutions.
C411.4: Analyze, design and develop adaptable and reusable solutions of minimal
complexity by using modern tools.
C411.5: Implement and test solutions to trace against the user requirements.
ABSTRACT
The increasing demand for personalized recommendations in retail has led to the adoption of
deep learning models in recommendation systems. While much research focuses on model
optimization and fine-tuning, the integration of an end-to-end data pipeline is often
overlooked. This project presents a comprehensive framework for developing a deep
learning-based recommendation system using a retail transaction dataset, comparing deep
learning algorithms such as a feedforward neural network (FNN), Neural Collaborative
Filtering (NCF), and Deep Matrix Factorization (DeepMF). The proposed pipeline covers data
storage; extraction, transformation, and loading (ETL); business intelligence; model training;
and incremental learning. We demonstrate the effectiveness of this approach through a case
study on a retail transaction dataset, highlighting the importance of a holistic data pipeline
in improving model performance and scalability. Among the compared models, NeuralCF
demonstrated superior performance, achieving 92% accuracy, making it well suited for
real-world applications. The proposed system effectively integrates ETL processing, deep
learning-based training, and automated recommendation generation, ensuring high
adaptability and accuracy in retail analytics.
TABLE OF CONTENTS
LIST OF ABBREVIATIONS xv
1. INTRODUCTION 1
1.1 Overview 1
1.2 Project Description 1
1.3 Methodology 3
1.3.1 Dataset Selection 3
1.3.2 Preprocessing 3
1.3.3 Feature Extraction & Model 4
1.3.4 Training & Classification 4
1.4 Neural Collaborative Network 5
2. LITERATURE SURVEY 7
3. EXISTING SYSTEM 10
3.1 Overview 10
3.2 Disadvantages 11
4. SYSTEM STUDY 12
4.1 Technical Feasibility 13
4.2 Economic Feasibility 13
4.3 Operational Feasibility 14
5. PROPOSED SYSTEM 15
5.1 Overview 15
5.2 Advantages 15
6. SYSTEM SPECIFICATION 16
6.1 Hardware Requirements 16
6.1.1 Processor 17
6.1.2 RAM 17
6.1.3 Hard Disk 18
6.1.4 Display 18
6.1.5 GPU 18
6.2 Software Requirements 19
6.2.1 Front End 19
6.2.2 Back End 22
6.2.3 Libraries and Frameworks 22
6.2.4 Operating System 22
6.2.5 Server 22
6.2.6 Python Version 23
6.2.7 Browser Compatibility 23
6.2.8 Security Tools 23
6.3 Diagrammatic Representation 24
7. SYSTEM DESIGN 25
7.1 System Design 25
7.2 System Architecture 25
8. SYSTEM IMPLEMENTATION 29
8.1 Modules 29
8.1.1 Data Collection & Understanding 29
8.1.2 Data Preprocessing & Transformation 30
8.1.3 Model Building & Training 30
8.1.4 Model Evaluation & Selection 30
8.1.5 Recommendation Generation 31
8.2 User Interface and Backend Integration 31
8.2.1 Graphical User Interface 31
8.2.2 Database Integration 31
8.2.3 User Authentication System 31
8.2.4 Recommendation History & Export 32
8.2.5 Feedback & Active Learning 32
8.2.6 Cloud Hosting & Mobile Expansion 32
11. CONCLUSION AND FUTURE ENHANCEMENT 38
11.1 Conclusion 38
APPENDIX 1 39
Sample Screenshot 39
APPENDIX 2 46
Sample Code 46
REFERENCES 58
LIST OF ABBREVIATIONS
ACRONYM ABBREVIATION
DL Deep Learning
HD High Dimension
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
In today's rapidly evolving digital landscape, personalization is a key driver of user
engagement and satisfaction. Recommendation systems have become essential tools for
businesses, helping users discover relevant products based on their interests and
behavior.
Deep learning has revolutionized the way these systems are developed by offering more
accurate, scalable, and automated models that can learn complex patterns from large
datasets. Businesses across industries are increasingly adopting
recommendation systems to connect users with products, services, or content that align with
their preferences and behavioral patterns.
These systems not only improve user experience but also significantly boost customer
retention and revenue generation. With the advancement of artificial intelligence,
particularly deep learning techniques, recommendation systems have witnessed a
transformative shift. Traditional recommendation approaches, like collaborative filtering
and content-based filtering, are now being replaced or enhanced by deep learning models
that can capture complex, non-linear relationships within massive datasets. Deep learning
models enable automated feature extraction, better scalability, and more precise predictions.
This project centers on designing and developing a deep learning-based recommendation
engine using retail transaction data. By harnessing the capabilities of advanced deep
learning architectures, we aim to build a system that can provide highly personalized
product suggestions to users, thereby enriching their shopping experience.
1.2 PROJECT DESCRIPTION
The system is trained and tested on a structured retail transaction dataset, where user
purchasing history is utilized to predict future interests.
The primary evaluation criteria for model performance are Root Mean Square
Error (RMSE) and Mean Absolute Error (MAE).
1.3 METHODOLOGY
The system architecture consists of a data pipeline, a model training framework,
and a web-based user interface built with Flask (backend) and React.js (frontend).
The development process of the recommendation system is structured into multiple
key phases:
1.3.1 Dataset Selection
The dataset used for this project is a Retail Transaction Dataset, which includes
the following attributes:
Total_Cost
Season
Store_Type
Customer_Category
This dataset provides enough information to create user-item interaction matrices for model
training.
1.3.2 Preprocessing
User and Product IDs are transformed into dense vectors using embedding layers.
Each model combines user and item embeddings and passes them through
multiple fully connected layers to predict the likelihood of a user purchasing a
product.
Feedforward Neural Network (FNN):
A basic multilayer perceptron model that combines user and item embeddings and
passes them through several fully connected layers to predict interaction scores.
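The following is a minimal sketch of such a feedforward (MLP) recommender in PyTorch; the embedding size, hidden-layer widths, and class name are illustrative assumptions rather than the project's exact architecture.

import torch
import torch.nn as nn

class FeedforwardRecommender(nn.Module):
    """Illustrative MLP recommender: user/item embeddings -> dense layers -> score."""
    def __init__(self, n_users, n_items, emb_dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids):
        # Concatenate the dense user and item vectors and map them to an interaction score
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)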
Deep Matrix Factorization (DeepMF):
This model combines the power of matrix factorization with deep networks, allowing
for more flexible and deeper learning of latent factors from sparse interaction data.
Each model processes user and product embeddings through dense layers and predicts
the likelihood of purchase.
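As an illustration only, a DeepMF-style model can be sketched as two embedding towers whose deep representations are combined with a dot product; the layer sizes below are assumptions, not the project's exact configuration.

import torch
import torch.nn as nn

class DeepMF(nn.Module):
    """Illustrative Deep Matrix Factorization: separate user/item towers, dot-product score."""
    def __init__(self, n_users, n_items, emb_dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.user_tower = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 32))
        self.item_tower = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 32))

    def forward(self, user_ids, item_ids):
        u = self.user_tower(self.user_emb(user_ids))
        v = self.item_tower(self.item_emb(item_ids))
        # Interaction score as the dot product of the learned latent factors
        return (u * v).sum(dim=-1)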
Each model is trained for 100 epochs using Mean Squared Error (MSE) as the loss
function, ensuring that the model learns to minimize the squared differences between
actual and predicted values. The Adam optimizer with a learning rate of 0.0005 is
employed for efficient gradient descent. After training, the models are evaluated using
Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) to measure prediction
accuracy. The model achieving the lowest RMSE is considered the best performing model
and is selected for integration into the final recommendation system.
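As an illustration, this training and evaluation procedure might look like the following sketch in PyTorch; the helper name, full-batch updates, and pre-built input tensors are simplifying assumptions rather than the project's exact code (the fuller script appears in Appendix 2).

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.metrics import mean_squared_error, mean_absolute_error

def train_and_evaluate(model, train_users, train_items, train_targets,
                       test_users, test_items, test_targets, epochs=100, lr=0.0005):
    criterion = nn.MSELoss()                        # MSE loss, as stated above
    optimizer = optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(train_users, train_items), train_targets)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        preds = model(test_users, test_items).numpy()
    rmse = float(np.sqrt(mean_squared_error(test_targets.numpy(), preds)))
    mae = float(mean_absolute_error(test_targets.numpy(), preds))
    return rmse, mae  # the model with the lowest RMSE is kept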
1.4 NEURAL COLLABORATIVE FILTERING (NCF)
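Since the description here is brief, the sketch below illustrates a typical NCF architecture that fuses a generalized matrix-factorization (GMF) branch with an MLP branch; the dimensions and class name are illustrative assumptions, not the project's exact implementation.

import torch
import torch.nn as nn

class NeuralCF(nn.Module):
    """Illustrative NCF: GMF (element-wise product) branch fused with an MLP branch."""
    def __init__(self, n_users, n_items, emb_dim=32):
        super().__init__()
        self.user_emb_gmf = nn.Embedding(n_users, emb_dim)
        self.item_emb_gmf = nn.Embedding(n_items, emb_dim)
        self.user_emb_mlp = nn.Embedding(n_users, emb_dim)
        self.item_emb_mlp = nn.Embedding(n_items, emb_dim)
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
        self.output = nn.Linear(emb_dim + 32, 1)

    def forward(self, user_ids, item_ids):
        # GMF branch: element-wise product of user and item embeddings
        gmf = self.user_emb_gmf(user_ids) * self.item_emb_gmf(item_ids)
        # MLP branch: concatenated embeddings passed through dense layers
        mlp = self.mlp(torch.cat([self.user_emb_mlp(user_ids), self.item_emb_mlp(item_ids)], dim=-1))
        return self.output(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)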
CHAPTER 2
LITERATURE SURVEY
The field of recommender systems has rapidly evolved with the integration of
deep learning techniques, enabling more personalized, context-aware, and
accurate recommendations. This chapter presents a review of significant works in
the area of deep learning-based recommender systems, focusing on models,
trends, and advancements that align with the objectives of our AI-powered
recommendation system using a retail transaction dataset.
1.6 Title: Deep Learning-Based Recommendation System: Systematic Review and
Classification
Authors: Caiwen Li, Iskandar Ishak, Hamidah Ibrahim, Maslina Zolkepli, Fatimah Sidi, Caili Li
Reference: IEEE Access, Vol. 11, pp. 113790–113835, 2023
This paper provides a comprehensive systematic review and taxonomy of deep learning-
based recommender systems. The study classifies existing models into categories such as
collaborative filtering, content-based, hybrid, and sequential recommenders using deep
learning paradigms like CNNs, RNNs, Autoencoders, and GANs. It also compares models
in terms of accuracy, scalability, interpretability, and cold-start adaptability.
1.7 Title: Fusing User Preferences and Spatiotemporal Information for Sequential
Recommendation
Authors: Sizhe Yin, Yang Xia, Yujing Liu, Songhe Han, Zijie Ouyang
Reference: IEEE Access, Vol. 10, pp. 89545–89554, 2022
This paper explores the integration of spatiotemporal data with user preferences to enhance
sequential recommendation accuracy. It proposes a dual attention mechanism to capture
both temporal dynamics and spatial behavior of users, enabling more context-aware
predictions. The study demonstrates that combining these factors significantly improves the
recommendation performance for time-sensitive platforms.
1.8 Title: A Personalized Time-Sequence-Based Book Recommendation Algorithm for Digital Libraries
Authors: Fuli Zhang
Reference: IEEE Access, Vol. 4, pp. 2714–2720, 2016
This work proposes a time-sequence-based algorithm tailored for digital libraries, focusing
on personalized recommendations through a user's reading history over time. It uses a
probabilistic model to identify patterns in sequential data and generate context-aware book
recommendations.
1.9 Title: Exploring the Landscape of Hybrid Recommendation Systems in E-
Commerce
Authors: Kailash Chowdary Bodduluri, Arianit Kurti, Francis Palma, Ilir Jusufi, Henrik
Löwenadler
Reference: IEEE Access, Vol. 12
This systematic literature review investigates the evolution and challenges of hybrid
recommender systems that combine multiple techniques (collaborative, content-based,
knowledge-based) for better personalization in e-commerce. The paper discusses
hybridization strategies, fusion models, and system evaluations to enhance recommendation
diversity and reduce cold-start issues.
This study addresses the cold-start problem in recommender systems by utilizing stereotype-
based user profiling combined with deep learning models. It proposes a mechanism to
cluster new users based on predefined stereotypes and adapt personalized recommendations
accordingly using neural models.
CHAPTER 3
EXISTING SYSTEM
3.1 Overview
Traditional recommendation methods, such as collaborative filtering and content-based
filtering, have powered major platforms for years but are becoming increasingly
insufficient due to the explosion of retail data, dynamic user behaviors, and diverse product
catalogs.
Moreover, classical machine learning models such as Decision Trees, k-Nearest Neighbors
(k-NN), and basic Matrix Factorization techniques have been used. However, they fail to
model complex relationships like sequential purchasing patterns, temporal behaviors, and
evolving user preferences effectively.
3.2 DISADVANTAGES:
Cold-Start Problem:
o When a new user registers or a new product is added, traditional models have
no prior interactions to base recommendations on.
Sparsity Problem:
o User-Item interaction matrices are extremely sparse (very few users interact
with a large number of items), leading to reduced recommendation accuracy.
Limited Understanding of Context:
o Traditional models treat all user interactions as independent and ignore the
sequential nature of purchases or preferences evolving over time.
Scalability Challenges:
o As datasets grow into millions of users and products, the computation required
for real-time recommendations becomes impractical.
Static Nature:
o Recommendations are often generated based on static historical data without
incorporating the user's latest behaviors.
Low Personalization:
o Cannot capture personalized patterns like seasonal interests, brand preferences,
or buying patterns over a timeline.
Underutilization of Data:
o Modern transaction datasets contain rich features (like season, promotion, store
type), but traditional systems mostly use only basic user-product interactions.
CHAPTER 4
SYSTEM STUDY
The proposed system uses Python (Flask) for the backend and React.js for the
frontend, both of which are industry-standard technologies with strong community
support. The deep learning models are built using PyTorch, a flexible and powerful
framework for neural network training. The retail transaction dataset is stored and
processed locally, ensuring data privacy and faster access.
4.1 Technical Feasibility
Programming Frameworks:
o Backend: Flask (Python lightweight web framework) is used for API
development and serving recommendations.
o Frontend: React.js for building a dynamic, fast, and user-friendly web
interface.
o Machine Learning: PyTorch for designing and training Deep Learning
models.
Feasibility Analysis:
o Libraries like PyTorch, scikit-learn, pandas, numpy, Flask, and Chart.js are
open-source and extensively documented.
o No proprietary tools or expensive licenses required.
Integration:
o RESTful APIs are designed to seamlessly connect the React frontend with
the Flask backend.
o Models are stored in .pth format and can be loaded without retraining every
time.
Conclusion: The technical aspects of the project are highly feasible using existing tools.
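As a small illustration of the point about .pth storage, a saved model can be restored for inference without retraining; the loader function below is a hedged sketch (the model class, user/item counts, and file path follow the report's conventions but the exact helper is an assumption).

import torch

def load_trained_model(model_cls, n_users, n_items, path):
    """Rebuild the architecture and restore trained weights from a .pth file (no retraining)."""
    model = model_cls(n_users, n_items)
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()  # inference mode
    return model

# Example (hypothetical): model = load_trained_model(NeuralCF, n_users, n_items, "models/NeuralCF.pth")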
4.2 Economic Feasibility
Development Cost:
o All tools and libraries used are free and open-source.
o The system can be developed and deployed with minimal hardware
investment.
Deployment Cost:
o Hosting platforms like Railway.app, Render, or AWS Free Tier can host
both the frontend and backend initially at zero or minimal cost.
o Storage for models (.pth files) is lightweight (~10-100 MB per model).
Cost Efficiency:
o Avoids the need for expensive GPU-based servers post-training (models are
inference-optimized).
4.3 Operational Feasibility
CHAPTER 5
PROPOSED SYSTEM
5.1 Overview
Core Models:
Neural Collaborative Filtering (NCF)
Pipeline Workflow:
Data Ingestion & ETL: Intelligent preprocessing that normalizes, cleans, and
enriches raw transaction data.
Model Training & Evaluation: Automated, parallel training with continuous
evaluation against key metrics (RMSE, MAE).
Performance Visualization: Dynamic dashboards showcasing model precision
and loss trends.
Real-time Recommendation Engine: On-demand personalized recommendations
delivered in milliseconds upon username input.
5.2 Advantages
CHAPTER 6
SYSTEM SPECIFICATION
System specification forms the foundation and infrastructure required to support the
complete life cycle of the AI-powered recommendation system, including data
handling, model training, evaluation, and end-user interaction via the web interface. The
proposed system leverages modern tools such as Flask (Python) for backend APIs,
React.js for frontend visualization, and PyTorch for model development and training.
This section outlines all critical hardware and software components required to ensure
smooth development, deployment, and scalability of the application across devices and
environments.
6.1.2 RAM
Minimum: 8 GB
Recommended: 16 GB or higher
6.1.3 Storage
SSD storage radically cuts data loading times and accelerates development cycles.
6.1.4 Display
Figure 6.1
6.2 Software Requirements
A carefully curated open-source tech stack to foster rapid innovation and future
scalability.
6.2.2 Backend
6.2.6 Python Environment
Figure 6.2
CHAPTER 7
SYSTEM DESIGN
System design serves as the architectural blueprint that translates user expectations into
structured, efficient, and scalable components. For the project titled "AI-Powered Retail
Recommender System using Deep Learning", the system has been thoughtfully
architected to offer intelligent product suggestions based on a user's purchase history and
transaction behavior. The design ensures modularity, maintainability, and data-driven
decision-making by incorporating layered architecture and modern full-stack
technologies.
The entire pipeline—from data ingestion and preprocessing to model training, storage,
and real-time recommendation delivery—has been modularized to support easy updates,
testing, and scaling.
Modularity: Each major process (ETL, Model Training, API, Frontend) is isolated
into independent components.
Abstraction: Complex ML logic is hidden behind well-documented APIs.
Reusability: Models and data pipeline components can be reused for different
recommendation scenarios.
Scalability: Designed to support increasing dataset size or additional users/items.
Security: Data access is managed with secure endpoints and potential for future
user authentication.
7.2 System Architecture Layers:
4. Data Layer
Categorical Encoding: Label Encoding for transforming user and product IDs.
Numerical Scaling: MinMaxScaler normalizes continuous features like "Total
Cost" into bounded ranges.
Tensor Conversion: DataFrames are transformed into tensors, ready for deep
learning ingestion.
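As an illustrative sketch of these data-layer steps (column names are taken from the dataset description; the exact scaler configuration and file path are assumptions):

import pandas as pd
import torch
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

df = pd.read_csv("datasets/Retail_Transactions_Dataset.csv")

# Categorical encoding: map user and product names to integer IDs
user_ids = LabelEncoder().fit_transform(df["Customer_Name"])
item_ids = LabelEncoder().fit_transform(df["Product"])

# Numerical scaling: bound Total_Cost to [0, 1]
cost = MinMaxScaler().fit_transform(df[["Total_Cost"]])

# Tensor conversion: ready for deep learning ingestion
users_t = torch.tensor(user_ids, dtype=torch.long)
items_t = torch.tensor(item_ids, dtype=torch.long)
cost_t = torch.tensor(cost, dtype=torch.float32).squeeze(1)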
8. Recommendation Engine
Dynamic Model Loading: Auto-loads the best model based on the lowest RMSE
score.
User-centric Prediction: Accepts a user name input, returning top 5 personalized
product suggestions instantly.
CHAPTER 8
SYSTEM IMPLEMENTATION
8.1 Modules
Module Overview
To ensure modularity, maintainability, and scalability, the project is structured into the
following functional modules:
8.1.1 Data Collection & Understanding
The retail transaction dataset includes the following attributes:
Customer_Name
Product
Total_Items
Total_Cost
Payment_Method
Store_Type
City
Discount_Applied
Promotion
Season
8.1.2 Data Preprocessing & Transformation
Raw data must be cleaned, transformed, and normalized for effective model training.
Steps Involved:
This transformation ensures deep learning models can ingest and learn from structured
inputs.
8.1.3 Model Building & Training
Three deep learning architectures are implemented using PyTorch, each offering unique
strengths:
Learns embeddings for users and products and maps them via deep layers.
Performs well with sparse data and generalizes to cold-start problems.
8.1.4 Model Evaluation & Selection
After training, each model is evaluated using the test set with these metrics:
The model with the lowest RMSE is selected as the best recommender for generating
predictions.
Results are stored in a model_results.json file for future access and recommendation
loading without retraining.
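The selection logic implied here can be sketched as follows; the JSON field names mirror those used in the Appendix 2 backend code, and the example values in the comment are illustrative only.

import json

with open("models/model_results.json") as f:
    results = json.load(f)  # e.g. [{"Model": "NeuralCF", "RMSE": 0.12, "MAE": 0.09}, ...]

# Pick the best performing model by lowest RMSE, without retraining
best = min(results, key=lambda r: r["RMSE"])
print(f"Best model: {best['Model']} (RMSE={best['RMSE']}, MAE={best['MAE']})")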
8.1.5 Recommendation Generation
Once trained, the model selected on the basis of the lowest RMSE is used to generate
recommendations for any user in the dataset, as sketched below.
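A minimal sketch of this generation step, assuming the best model has already been loaded and reusing the user/item ID dictionaries built from the dataset (as in the Appendix 2 backend code); the top-5 cutoff follows the design chapter.

import torch

def recommend_top5(model, user_name, users, items):
    """Score every product for one user and return the five highest-scoring product names."""
    model.eval()
    user_idx = torch.tensor([users[user_name]] * len(items), dtype=torch.long)
    item_idx = torch.arange(len(items), dtype=torch.long)
    with torch.no_grad():
        scores = model(user_idx, item_idx).squeeze()
    top = scores.topk(5).indices.tolist()
    id_to_item = {idx: name for name, idx in items.items()}
    return [id_to_item[i] for i in top]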
8.2 User Interface and Backend Integration
The frontend is responsive and styled using custom CSS with enhancements for
accessibility.
Currently, the project uses CSV files for storage. Future enhancements include:
CHAPTER 9
RESULTS AND DISCUSSION
9.1 Deep Learning Model Performance Overview
9.1.2 Dataset Description
The dataset used in this project contains anonymized retail transaction data with the
following fields:
Customer Name
Product
Transaction Date
Total Cost
Total Items
Payment Method
Store Type
Customer Category
Discount Applied
Season
Promotion Type
The dataset includes over 300,000 transaction records covering a wide variety of
products and user behaviors. It reflects realistic purchasing patterns across seasons,
promotions, customer demographics, and store types.
Users: 329,000+
Items: 570,000+
Transactions: 300,000+
Features Used: Total Cost, Product, Customer Category, Store Type, Season
After training all three models with the dataset, the following evaluation metrics were
obtained:
9.2 Observations:
NeuralCF outperformed the other models with the lowest RMSE and MAE,
indicating superior prediction quality.
FNN showed reliable performance with slightly higher error margins.
DMF showed relatively higher error rates, possibly due to data sparsity in certain
user-item combinations.
9.2.2 Robustness Scenarios Evaluated:
The system can scale efficiently and is compatible with deployment on platforms like
Render, Heroku, or AWS Lambda for lightweight API deployment.
While recommendation systems do not raise the same level of concern as biometric
detection systems, there are still ethical challenges to consider:
User Privacy: All user identifiers in the dataset were anonymized. No personal
data is stored in the deployed system.
Bias Mitigation: Training was performed on a diverse dataset to ensure no user
segment (e.g., customer category, region, gender) received biased
recommendations.
Transparency: The recommendation process is explainable — predictions are
based solely on transaction data.
Security: The backend is protected with CORS, and future implementations may
integrate JWT authentication and encryption for sensitive endpoints.
9.2.5 Conclusion from Experimental Results
This project demonstrates that Neural Collaborative Filtering (NCF) is the most
effective deep learning model among the three tested, achieving an accuracy of over
85%. The results validate the feasibility of deploying a deep learning-based
recommendation engine in real-world retail environments.
Figure 9.1
CHAPTER 10
SYSTEM TESTING
System testing is an essential phase of the project that ensures the end-to-end functionality,
reliability, performance, and security of the AI-powered recommender system. This phase
verifies that both the backend (Flask, PyTorch) and frontend (React.js) components work
seamlessly together. Each test type is performed to evaluate a different aspect of the
application and ensure that it is ready for deployment.
Scope:
Purpose: To ensure that modules like the model, dataset, and API routes interact
correctly.
Scope:
o Verifying that data from the React UI flows correctly through the Flask API and
back.
o Checking if model results (model_results.json) are used in recommendation
logic without inconsistency.
Scope:
Scope:
o Measuring the training time per model (e.g., NeuralCF, FNN, DMF).
o Benchmarking response time of the /recommend endpoint under varying input
sizes.
o Measuring frontend rendering time for visualizing large charts.
Tools Used: time, browser dev tools, Python profiling tools (cProfile, timeit).
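For example, the response time of the /recommend endpoint can be benchmarked with a short script; the use of the requests library, the local server URL, and the placeholder user name are assumptions for illustration.

import timeit

import requests

def call_recommend():
    # POST body mirrors the /recommend route in the Flask backend; user name is a placeholder
    requests.post("http://127.0.0.1:5000/recommend", json={"user_name": "John Doe"})

# Average latency over 20 calls against the locally running Flask server
avg = timeit.timeit(call_recommend, number=20) / 20
print(f"Average /recommend latency: {avg:.3f} s")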
10.5 Accuracy Testing
o Ensuring at least one model (NeuralCF in this case) reaches or exceeds 85%
accuracy, making it viable for production use.
o Repeated testing with various slices of the dataset to check for consistency in
predictions.
Scope:
Scope:
Feedback Loop: Conducted with users from both technical and non-technical
backgrounds.
Purpose: To evaluate the system’s behavior under high loads or constrained resources.
Scope:
Purpose: To ensure that new updates do not break previously functioning features.
Scope:
CHAPTER 11
CONCLUSION AND FUTURE ENHANCEMENT
11.1 Conclusion
This recommendation engine shows that deep learning-based approaches can dramatically
improve the accuracy and adaptability of personalized recommendation systems in the retail
domain.
11.2 Future Work
Integrate XAI (Explainable AI) tools to let users know why a specific product was
recommended.
Useful for improving trust and user adoption.
Add user login, role-based access, and recommendation history tracking for
enterprise use cases.
Cross-Domain Testing
APPENDIX 1
Sample Screenshot
Result page:
APPENDIX 2
Sample Code
# Training script: trains each model, records RMSE/MAE, and saves weights
import json

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
from models.recommender import get_deep_model  # Your model definitions here

# Load dataset and encode users/items as integer IDs
df = pd.read_csv("datasets/Retail_Transactions_Dataset.csv")
df['user_id'] = df['Customer_Name'].astype('category').cat.codes
df['item_id'] = df['Product'].astype('category').cat.codes
# Interaction target: min-max normalized Total_Cost (assumed implicit score)
df['score'] = (df['Total_Cost'] - df['Total_Cost'].min()) / (df['Total_Cost'].max() - df['Total_Cost'].min())

train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
train_users = torch.tensor(train_df['user_id'].values, dtype=torch.long)
train_items = torch.tensor(train_df['item_id'].values, dtype=torch.long)
train_targets = torch.tensor(train_df['score'].values, dtype=torch.float32)
test_users = torch.tensor(test_df['user_id'].values, dtype=torch.long)
test_items = torch.tensor(test_df['item_id'].values, dtype=torch.long)
test_targets = torch.tensor(test_df['score'].values, dtype=torch.float32)

n_users, n_items = df['user_id'].nunique(), df['item_id'].nunique()
results = []
criterion = nn.MSELoss()

for model_name in ["NeuralCF", "FNN", "DMF"]:
    model = get_deep_model(model_name, n_users, n_items)  # factory argument order assumed
    optimizer = optim.Adam(model.parameters(), lr=0.0005)
    loss_list = []

    model.train()
    for epoch in range(100):
        optimizer.zero_grad()
        loss = criterion(model(train_users, train_items).squeeze(), train_targets)
        loss.backward()
        optimizer.step()
        loss_list.append(loss.item())

    # Evaluation
    model.eval()
    with torch.no_grad():
        preds = model(test_users, test_items).squeeze().numpy()
        labels = test_targets.numpy()
    rmse = float(np.sqrt(mean_squared_error(labels, preds)))
    mae = float(mean_absolute_error(labels, preds))
    results.append({"Model": model_name, "RMSE": rmse, "MAE": mae})
    torch.save(model.state_dict(), f"models/{model_name}.pth")

    # Plot loss
    plt.plot(loss_list, label=model_name)

plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.grid(True)
plt.show()

# Print results and persist them for the recommendation API
print(results)
with open("models/model_results.json", "w") as f:
    json.dump(results, f, indent=2)
Frontend (React)
import React, { useState } from "react";
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";

function App() {
  return (
    <Router>
      <nav>
        {/* navigation links omitted in the report */}
      </nav>
      <Routes>
        {/* route paths assumed from the report's modules */}
        <Route path="/train" element={<TrainModel />} />
        <Route path="/recommend" element={<Recommendations />} />
      </Routes>
    </Router>
  );
}

// TrainModel (component body was omitted in the report)
function TrainModel() {
  return (
    <div>{/* training controls and model comparison charts */}</div>
  );
}

// Recommendations
function Recommendations() {
  const [userName, setUserName] = useState("");
  const [products, setProducts] = useState([]);

  // Calls the Flask /recommend endpoint; the response field name is assumed
  const fetchRecommendations = async () => {
    const res = await fetch("/recommend", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ user_name: userName }),
    });
    const data = await res.json();
    setProducts(data.recommendations || []);
  };

  return (
    <div>
      <input value={userName} onChange={(e) => setUserName(e.target.value)} />
      <button onClick={fetchRecommendations}>Get Recommendations</button>
      <ul>
        {products.map((p, i) => <li key={i}>{p}</li>)}
      </ul>
    </div>
  );
}

export default App;
Backend (Flask)
# app.py
from flask import Flask
from flask_cors import CORS
from routes.model_routes import model_blueprint

app = Flask(__name__)
CORS(app)
app.register_blueprint(model_blueprint)

if __name__ == "__main__":
    app.run(debug=True)


# routes/model_routes.py
import json

import pandas as pd
import torch
from flask import Blueprint, request, jsonify
from models.recommender import load_model, train_all_models  # helpers defined elsewhere; import path assumed

model_blueprint = Blueprint("model_blueprint", __name__)

DATASET_PATH = "datasets/Retail_Transactions_Dataset.csv"
df = pd.read_csv(DATASET_PATH)
users = {user: idx for idx, user in enumerate(df["Customer_Name"].unique())}
items = {item: idx for idx, item in enumerate(df["Product"].unique())}

@model_blueprint.route("/train_all", methods=["POST"])
def train_all():
    results = train_all_models()
    return jsonify(results)

@model_blueprint.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json()
    user_name = data.get("user_name")
    if user_name not in users:
        return jsonify({"error": "User not found"}), 404
    with open("models/model_results.json") as f:
        results = json.load(f)
    best_model = min(results, key=lambda x: x["RMSE"])["Model"]
    model = load_model(best_model, len(users), len(items))
    # The remaining scoring logic was not included in the report; sketch: rank all items, return top 5
    model.eval()
    user_idx = torch.tensor([users[user_name]] * len(items))
    item_idx = torch.arange(len(items))
    with torch.no_grad():
        scores = model(user_idx, item_idx).squeeze()
    top_items = scores.topk(5).indices.tolist()
    id_to_item = {idx: item for item, idx in items.items()}
    return jsonify({"recommendations": [id_to_item[i] for i in top_items]})
NPR COLLEGE OF ENGINEERING & TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE AND
ENGINEERING PROGRAM OUTCOMES (PO)
MAPPING
PO6: Addressed real-world retail personalization needs by improving product discovery through AI-driven recommendations for end users and businesses.
PO7: Promoted efficient resource usage by optimizing model parameters for faster inference and training time on local hardware and scalable cloud platforms.
PO8: Followed ethical guidelines in data usage and AI implementation by ensuring anonymization, privacy, and responsible recommendation practices.
PO9: Collaborated effectively as a team to manage the backend API integration, frontend UI components, and model comparison logic cohesively.
PO10: Demonstrated professional communication by creating technical documentation, project reports, and delivering visual presentations on model performance.
PO11: Managed all stages of the project lifecycle, from requirement analysis, dataset preprocessing, model training, and integration to deployment and UI testing.
PO12: Acquired practical skills in deep learning, full-stack development, and system deployment, contributing to continuous learning and future career advancement.
PSO1: Tackled a real-time retail personalization challenge by applying AI and deep learning models to user-product interaction data, demonstrating practical use of open-ended programming strategies in a research-oriented development environment.
PSO2: Utilized modern programming languages (Python) and platforms (Flask, PyTorch, ReactJS) while maintaining ethical standards in responsible AI model training and deployment for recommendation systems.
PSO3: Strengthened technical expertise through additional training and certifications in deep learning, full-stack development, and AI tools relevant to recommender systems, model evaluation, and user experience design.
REFERENCES