BE Fourth Year Computer Engineering
PROJECT SYNOPSIS
ON
Subjective Answers Evaluation Using Machine Learning and
Natural Language Processing
Submitted by
COBC56 : Prasad Sudhir Chavan
COBC57 : Linmay Diwakar Patil
COBC44 : Suraj Shivshankar Biradar
COBC14 : Chetan Sandeep Jagtap
Guided by
Mrs. N. A. Inamdar
B.E. (Comp)- 2024-25
DEPARTMENT OF COMPUTER ENGINEERING
STES’S SINHGAD ACADEMY OF ENGINEERING
KONDHWA, PUNE 411048
UNIVERSITY OF PUNE
2024-25
Abstract:
Subjective questions and their responses provide an open-ended evaluation of a student's
performance and aptitude. The answers are not constrained in any way, and students can
compose them in whatever form best reflects their perspective and conceptual grasp. However,
several important differences separate subjective responses from their objective counterparts.
To begin with, they are far longer than objective answers. Furthermore, they take more time to
write, demand considerably more effort, and carry much more context. Finally, their assessment
can suffer from a lack of focus and impartiality on the part of the instructor evaluating them.
Introduction:
Evaluating subjective papers by hand is a difficult and time-consuming undertaking. One of the
biggest obstacles to employing artificial intelligence (AI) for subjective paper evaluation is a
lack of understanding and acceptance of its results. There have been several attempts to use
computer science to grade students' responses; however, most of this work relies on simple
keyword counts or the presence of specific terms, and vetted data sets are also lacking. To
automatically evaluate descriptive responses, this project proposes a novel method that combines
machine learning, natural language processing, and supporting toolkits, including WordNet,
Word2Vec, word mover's distance (WMD), cosine similarity, multinomial naive Bayes (MNB),
and term frequency-inverse document frequency (TF-IDF). Answers are assessed against
keywords and model solution statements using a machine learning algorithm.
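The following is a minimal sketch of two of the techniques named above (TF-IDF and cosine similarity), assuming the scikit-learn library listed under the software requirements. The example answers and the 10-mark scale are illustrative assumptions, not the project's actual rubric or pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical model answer and student answer, for illustration only.
model_answer = (
    "Photosynthesis converts light energy into chemical energy, "
    "producing glucose and oxygen from carbon dioxide and water."
)
student_answer = (
    "Plants use sunlight to turn carbon dioxide and water into "
    "glucose, releasing oxygen in the process."
)

# Fit a shared TF-IDF vocabulary over both answers, then compare the vectors.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([model_answer, student_answer])
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# Map the similarity in [0, 1] onto an illustrative 10-mark scale.
print(f"Cosine similarity: {similarity:.2f}")
print(f"Suggested score:   {round(similarity * 10, 1)} / 10")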
Related work:
1. Automated Essay Scoring (AES):
PEG, e-rater, IEA: Early and widely used systems that automate essay scoring using
basic features and semantic analysis.
2. NLP Techniques:
BERT, Word2Vec: Advanced models that enhance understanding of text content
and context (a brief embedding-similarity sketch follows this list).
3. ML Algorithms:
SVM, Random Forests, Deep Learning: Commonly used for evaluating subjective
answers based on linguistic and semantic features.
4. Bias and Fairness:
Research aims to detect and minimize biases in automated scoring systems.
5. Applications:
Used in MOOCs for large-scale grading and in recruitment for evaluating candidate
responses.
6. Ethical Considerations:
Emphasis on transparency and explainability in ML models for education.
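As referenced in the "NLP Techniques" item above, the sketch below shows how word-embedding models such as Word2Vec can score the semantic similarity between a reference answer and a student answer. The use of pretrained GloVe vectors via gensim and the sample texts are assumptions made for illustration; they are not part of the original synopsis.

import gensim.downloader as api

# Load small pretrained word vectors (downloads on first use).
vectors = api.load("glove-wiki-gigaword-50")

def tokens_in_vocab(text):
    # Lowercase, split on whitespace, and keep only in-vocabulary tokens.
    return [w for w in text.lower().split() if w in vectors]

# Hypothetical reference and student answers, for illustration only.
reference = "the heart pumps oxygenated blood to the body through arteries"
answer = "blood rich in oxygen is pushed by the heart into the arteries"

# n_similarity returns the cosine similarity between the mean vectors
# of the two token sets.
score = vectors.n_similarity(tokens_in_vocab(reference), tokens_in_vocab(answer))
print(f"Semantic similarity: {score:.2f}")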
Project Objective:
The objective is to develop an automated system using ML and NLP to accurately and fairly
evaluate subjective answers, providing scores and feedback that align with human judgment,
while minimizing biases and improving efficiency in assessment processes.
Hardware/Software Requirements:
Hardware:
Processor: Multi-core CPU (e.g., Intel i7) or GPU (e.g., NVIDIA GTX 1080) for deep
learning.
RAM: Minimum 16 GB, recommended 32 GB.
Storage: SSD with at least 512 GB.
OS: Linux, Windows, or macOS.
Software:
Languages: Python 3.7+.
Libraries: Scikit-learn, TensorFlow, PyTorch, NLTK, SpaCy, Transformers.
IDE: Jupyter Notebook, PyCharm, VS Code.
Tools: Git for version control, Docker for containerization.
Date:
Mrs. N. A. Inamdar          Ms. S. B. Ghawate              Mr. S. N. Shelke
Guide                       Project Coordinator            H.O.D (Comp)