SEPM Notes

1. Explain Umbrella Activities in Detail

The umbrella activities in software engineering are supporting activities that are carried out
throughout the software development life cycle (SDLC). They ensure that the software project
stays on track, maintains quality, and adapts to change. Each umbrella activity is explained below:

🔹 1. Software Project Tracking and Control


 Purpose: To monitor the progress of the project.
 What happens: A development plan is made at the beginning. As the project continues,
progress is checked regularly.
 Why important: If delays or issues are found, corrective actions can be taken (like
rescheduling tasks).
 Example: If testing takes longer than expected, the timeline for the next phase might
need to be adjusted.

🔹 2. Risk Management
 Purpose: To handle uncertainties that may affect the project.
 What happens:
o Identify possible risks (e.g., team members leaving, tech issues).
o Analyze how likely they are and how serious they’d be.
o Make a backup plan (contingency) in case they happen.
 Example: If a developer might quit mid-project, have another team member ready to step
in.

🔹 3. Software Quality Assurance (SQA)


 Purpose: To ensure the software meets quality standards.
 What happens: A dedicated team tests and verifies things like performance, usability,
etc., at different stages.
 Why important: Helps catch problems early, so developers aren’t overwhelmed at the
end.
 Example: Performance testing is done midway rather than waiting till the end.

🔹 4. Technical Reviews
 Purpose: To find and fix errors early.
 What happens: After finishing a module, a team reviews the code/work to catch
mistakes before moving forward.
 Why important: Prevents bugs from spreading to later parts of the project.
 Example: Reviewing a login module before integrating it with the main system.
🔹 5. Measurement
 Purpose: To collect useful data about the software process and product.
 What happens: Metrics are gathered (like number of bugs, development time, etc.).
 Why important: Helps in improving the process and ensuring the product meets goals.
 Example: Tracking how many defects are found per 1,000 lines of code.
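As a small illustration of how such a metric could be computed, here is a minimal Python sketch; the function name and the sample figures are made up for the example, not taken from any standard.

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code (defects/KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical figures for one release: 42 defects found in 28,000 lines of code
print(defect_density(defects_found=42, lines_of_code=28_000))  # 1.5 defects per KLOC
```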

🔹 6. Software Configuration Management (SCM)


 Purpose: To control changes in software during development.
 What happens: Tracks different versions of files, documents, and code.
 Why important: Ensures that changes are made in a controlled, traceable way.
 Example: Using Git to manage different versions of code and avoid conflicts.

🔹 7. Reusability Management
 Purpose: To make use of existing components in new projects.
 What happens: Identifies reusable parts and sets rules for using them.
 Why important: Saves time and ensures consistency.
 Example: Reusing a payment processing module in multiple apps.

🔹 8. Work Product Preparation and Production


 Purpose: To produce required project documents and artifacts.
 What happens: Prepare models, forms, logs, and documentation needed for the project.
 Why important: These are needed for understanding, tracking, and maintaining the
software.
 Example: Creating user manuals, test reports, design documents, etc.

2. Capability Maturity Model (CMM)


•Capability Maturity Model is a bench-mark for measuring the maturity of an organization’s
software process.
•It is a methodology used to develop and refine an organization’s software development process.
•CMM can be used to assess an organization against a scale of five process maturity levels based
on certain Key Process Areas (KPA).
•It describes the maturity of the company based upon the project the company is dealing with
and the clients.
•Each level ranks the organization according to its standardization of processes in the subject
area being assessed.
Maturity Level 1 – Initial: Company has no standard process for software development. Nor does
it have a project-tracking system that enables developers to predict costs or finish dates with any
accuracy.
Maturity Level 2 – Managed: Company has installed basic software management processes and
controls. But there is no consistency or coordination among different groups.
Maturity Level 3 – Defined: Company has pulled together a standard set of processes and
controls for the entire organization so that developers can move between projects more easily
and customers can begin to get consistency from different groups.
Maturity Level 4 – Quantitatively Managed: In addition to implementing standard processes,
company has installed systems to measure the quality of those processes across all projects.
Maturity Level 5 – Optimizing: Company has accomplished all of the above and can now begin
to see patterns in performance over time, so it can tweak its processes in order to improve
productivity and reduce defects in software development across the entire organization.

3. Incremental Process Model


The Incremental Process Model is a software development approach where the system is
designed, implemented, and tested in small parts or increments. Each increment adds a
functional portion to the final system, and the product grows with each iteration.
Instead of delivering the entire software at once (like in the Waterfall model), the software is
developed and delivered step-by-step.

🔁 How It Works – Phase by Phase:


1. Requirements Analysis
 Full system requirements are collected at the beginning.
 However, these are divided into small parts (increments) based on priority, risk, or
functionality.

2. System Design (for first increment)


 Design the architecture of the complete system.
 Then focus on designing just the part that will be developed in the first increment.

3. Implementation of First Increment


 The first increment (core features or high-priority functions) is developed.
 It’s tested and delivered to the user.

4. Feedback and Planning for Next Increment


 User feedback is gathered.
 Based on feedback, the next set of features is selected and implemented in the next
increment.
5. Repeat Steps 2–4
 The process continues with each increment adding new features.
 Integration and testing are done with each addition until the complete system is built.

4. Spiral Model
The Spiral Model is a risk-driven Software Development Life Cycle (SDLC) model proposed by
Barry Boehm in 1986. It combines features of the Waterfall Model and prototyping, and it provides
a systematic and iterative approach to software development. In its diagrammatic representation it
looks like a spiral with many loops; the exact number of loops is not fixed and varies from project
to project. Each loop of the spiral is called a phase of the software development process.
Instead of following a strict, linear path like the Waterfall Model, the Spiral Model lets teams
develop software through repeated cycles, addressing risks and gathering feedback in each loop.
The spiral expands with each iteration, gradually evolving the product. Because the focus is on
managing risk through multiple iterations, the model is described as risk-driven.

Phases of the Spiral Model
1. Communication
 This is the initial phase of each spiral.
 It involves gathering requirements by interacting with customers or stakeholders.
 Both functional and non-functional requirements are collected.
 Helps understand what the user wants before proceeding.
🔹 Example: “We need a shopping cart system with secure payment and user login.”

2. Planning
 In this phase, developers:
o Analyze risks (what can go wrong?)
o Prioritize features
o Decide what to develop in this iteration
o Estimate the time, effort, cost, and schedule
o Allocate resources (people, tools, technologies)
o Ensure all stakeholders agree on the project goals and scope
🔹 Example: “Let’s focus on building a secure login system this cycle, and identify potential
password storage risks.”

3. Modelling
 Also called design or prototyping.
 Involves:
o Creating design models or architecture.
o Sometimes also building prototypes to validate ideas.
 Helps in better understanding and handling of risks before actual development.
🔹 Example: Design the database for user accounts, and create a mock login page layout.

4. Construction
 This is the development and testing phase.
 The actual part of the software (based on the current spiral’s plan) is coded and tested.
 Ensures that small, working parts of the software are delivered in each cycle.
🔹 Example: Code the login module, test it for authentication, and fix issues.

5. Deployment
 The working software developed in the current iteration is deployed or demonstrated to
the user.
 Feedback is collected, which will guide the next loop.
 If the product is not yet final, the cycle continues with more features added in the next
spiral.

5. Agile Process Model


The Agile Model was primarily designed to help a project adapt
quickly to change requests, so its main aim is to facilitate quick
project completion. To accomplish this, agility is required. Agility is
achieved by fitting the process to the project and removing
activities that may not be essential for a specific project. Anything
that is a waste of time and effort is also avoided. The Agile Model
refers to a group of development processes; these processes share
some basic characteristics but have certain subtle differences
among themselves.
The Agile Process Model is a modern software development approach that emphasizes
flexibility, collaboration, customer feedback, and rapid delivery of functional software. Unlike
traditional models like the Waterfall model, Agile is iterative and incremental, allowing teams
to respond quickly to changing requirements.

🔁 1. Concept / Requirements Gathering


 Talk to the customer.
 Understand what they need from the software.
 List the core features and functions they expect.

📋 2. Planning
 Decide what to build first.
 Break the project into small parts called iterations or sprints (usually 1–4 weeks).
 Assign tasks to the team.

🧠 3. Design
 Make a simple and flexible design.
 Focus only on what is needed for the current sprint.
 No heavy or detailed documentation.
🛠 4. Development
 Developers start writing the code.
 Work is done in small chunks.
 Team members collaborate and make quick changes if needed.

🧪 5. Testing
 Test the code as soon as it’s written (often daily).
 Fix bugs quickly.
 Customers may also test and give feedback.

🔄 6. Deployment
 Deliver a working version of the software to the customer.
 Could be at the end of each sprint.
 The software should be usable even if it’s not fully finished.

💬 7. Feedback & Review


 Customer reviews the product and gives feedback.
 Team discusses what went well and what can be improved.
 Make changes in the next sprint based on feedback.
6. Explain Functional and Non-Functional Requirements
What are Functional Requirements?
These are the requirements that the end user specifically demands
as basic facilities that the system should offer. All these
functionalities must be incorporated into the system as part of the
contract.
 These are represented or stated in the form of input to be
given to the system, the operation performed and the
output expected.
 They are the requirements stated by the user which one
can see directly in the final product, unlike the non-
functional requirements.
Examples of questions functional requirements answer:
 What are the features that we need to design for this
system?
 What are the edge cases we need to consider, if any, in our
design?
Functional requirements define what a system should do. They describe the specific behavior
or functions of the system — the services, tasks, or functions the software must perform.
📘 Examples:
 The system must allow users to log in using a username and password.
 The software should send a confirmation email after a user registers.
 A customer must be able to search for products by category or keyword.
 The ATM must allow users to withdraw cash, check balance, and print receipts.
🛠 Purpose:
 They are the core features and business logic.
 Used to design, develop, and test the system functionalities.
🔍 Sources:
 Collected through interaction with stakeholders, end-users, domain experts, and business
analysts.
What are Non-Functional Requirements?
These are the quality constraints that the system must satisfy
according to the project contract. The priority or extent to which
these factors are implemented varies from one project to another.
They are also called non-behavioral requirements. They deal with
issues like:
 Portability
 Security
 Maintainability
 Reliability
 Scalability
 Performance
 Reusability
 Flexibility
Examples:
 Each request should be processed with minimum latency.
 The system should be highly available.
Non-functional requirements specify how the system performs a function. They define the
quality attributes, constraints, or standards the system must meet, rather than specific
behaviors.
📘 Examples:
 The system should load the dashboard page within 3 seconds. (Performance)
 The application should support up to 1 million users. (Scalability)
 Data should be encrypted during transmission. (Security)
 The system should be available 24/7. (Availability)
 The system should be easy to use with a simple UI. (Usability)
🛠 Purpose:
 Improve user satisfaction, system efficiency, and quality.
 Help guide architecture decisions and design trade-offs.
Examples of Functional and Non-functional Requirements
Let’s consider a couple of examples to illustrate both types of
requirements:
1. Online Banking System
1. Functional Requirements:
 Users should be able to log in with their username and
password.
 Users should be able to check their account balance.
 Users should receive notifications after making a
transaction.
2. Non-functional Requirements:
 The system should respond to user actions in less than 2
seconds.
 All transactions must be encrypted and comply with
industry security standards.
 The system should be able to handle 100 million users with
minimal downtime.
2. Food Delivery App
1. Functional Requirements
 Users can browse the menu and place an order.
 Users can make payments and track their orders in real
time.
2. Non-functional Requirements:
 The app should load the restaurant menu in under 1 second.
 The system should support up to 50,000 concurrent orders
during peak hours.
7. Requirements Engineering Process
• A systematic and strict approach to the definition, creation, and
verification of requirements for a software system is known as
requirements engineering.
• To guarantee the effective creation of a software product, the
requirements engineering process entails several tasks that help
in understanding, recording, and managing the demands of
stakeholders.
1. ✅ Feasibility Study
Objective:
To determine whether the proposed software project is practical and worthwhile from
technical, economic, and operational perspectives.
Explanation:
 Assesses whether the project can be completed with the given budget, time, and
resources.
 Identifies potential technical challenges, organizational constraints, or market risks.
 Includes:
o Technical feasibility: Can the system be built with current technologies?
o Economic feasibility: Will the cost-benefit ratio justify the investment?
o Legal and operational feasibility: Are there legal, ethical, or operational
restrictions?
Output:
A Feasibility Report recommending whether or not to proceed with development.

2. 🔍 Requirements Elicitation
Objective:
To collect functional and non-functional requirements from stakeholders.
Explanation:
 This is the information-gathering phase.
 Stakeholders may include users, customers, business analysts, developers, etc.
 Techniques include:
o Interviews
o Questionnaires
o Brainstorming sessions
o Observation
o Use Case and Scenario Analysis
 Focus is on understanding what the user wants and how the system should behave.
Output:
A set of raw user needs, constraints, and expectations.
3. 📋 Requirements Specification
Objective:
To formally document the gathered requirements in a structured, unambiguous way.
Explanation:
 Converts the raw requirements into a clear, well-defined Software Requirements
Specification (SRS) document.
 Includes:
o Functional requirements: Describes what the system should do.
o Non-functional requirements: Describes how the system should behave
(performance, security, usability, etc.).
o System models (like data flow diagrams, ER models, UML diagrams).
 Must follow standards such as IEEE 830 for clarity and consistency.
Output:
A formal SRS document approved by both development team and stakeholders.

4. ✔️Requirements Verification and Validation (V&V)


Objective:
To ensure the documented requirements are correct, complete, and acceptable to
stakeholders.
✅ Verification (Technical Check)
 Ensures the SRS is error-free, consistent, complete, and meets predefined quality
criteria.
 Uses peer reviews, traceability analysis, and static checks.
✅ Validation (Stakeholder Check)
 Confirms the requirements match what the user really needs.
 Uses prototyping, user walkthroughs, and scenario testing.
Output:
A set of verified and validated requirements, ready for design and implementation.

5. 🔄 Requirements Management
Objective:
To control changes to requirements and maintain their consistency over the project lifecycle.
Explanation:
 Requirements often change due to evolving business needs, market conditions, or user
feedback.
 This phase:
o Tracks each requirement’s status
o Manages versions and changes
o Maintains traceability links (from requirement to design, code, and test cases)
o Uses tools like JIRA, DOORS, or ReqView for managing updates
Output:
An up-to-date, controlled version of the SRS with all approved changes tracked.

8. SRS for Hospital Management System


1. Introduction
1.1 Purpose
The purpose of this document is to outline the requirements for the Hospital Management
System (HMS), which automates day-to-day operations such as patient registration, appointment
scheduling, billing, doctor management, and medical record tracking.
1.2 Document Conventions
 Headings are bold and numbered.
 Functional requirements are listed in bullet points.
 Terms like Admin, Doctor, Patient refer to user roles.
1.3 Scope of the Project
The HMS will be a web-based application accessible by hospital staff and doctors. It will allow
managing patient records, appointments, medical histories, billing, and inventory. The system
ensures fast retrieval of patient data and supports hospital administration and management tasks.

2. Overall Description
2.1 Product Perspective
The system is an independent application with a centralized database. It will interact with
external systems for lab results and payment gateways but will mostly function independently.
2.2 Product Functions
 Register and manage patient and staff records
 Schedule appointments for doctors
 Generate medical reports and bills
 Maintain inventory of medicines
 Provide admin access to manage system users
2.3 Operating Environment
 Web browser (Chrome, Firefox)
 Backend: Node.js / Java / Python
 Database: MySQL / PostgreSQL
 Operating System: Windows/Linux server
2.4 Design and Implementation Constraints
 The system must be developed using open-source technologies.
 It should follow data privacy regulations (HIPAA or similar).
 Support both desktop and mobile-friendly views.
2.5 Assumptions and Dependencies
 Users are expected to have basic computer literacy.
 The hospital will provide the necessary hardware infrastructure.
 Internet connection is required for remote access.

3. External Interface Requirements


3.1 User Interface
 Dashboard for each user role (Admin, Doctor, Receptionist)
 Forms for patient registration, billing, and appointments
 Reports section for admin to generate summaries
3.2 Hardware Interface
 Server with at least 8GB RAM and 500GB storage
 Barcode scanner (for patient IDs and billing)
3.3 Software Interface
 Integration with laboratory software (for test results)
 Interface with payment gateway APIs (e.g., Razorpay/PayPal)

4. System Features
 Add, edit, delete patient records
 Schedule and manage appointments
 Search and filter medical histories
 Generate billing invoices
 Admin panel for user and role management

5. Other Functional and Non-Functional Requirements


5.1 Performance Requirements
 System should handle at least 50 concurrent users
 Response time for any transaction should be < 2 seconds
5.2 Safety Requirements
 Auto-logout after 10 minutes of inactivity
 Regular backups of medical records
5.3 Security Requirements
 Role-based access control
 Data encryption for sensitive patient information
 Secure login with CAPTCHA and 2FA
5.4 Software Quality Attributes
 Usability: Easy to navigate and understand
 Reliability: 99.9% uptime
 Maintainability: Modular code for easy updates
 Scalability: Should support addition of new departments

9. COCOMO II Model (Constructive Cost Model II)


COCOMO II is an advanced version of the original COCOMO (Constructive Cost Model)
developed by Barry Boehm. It is used for estimating the cost, effort, and schedule when
planning new software development projects.

✅ Key Goals of COCOMO II


 Estimate the person-months required for software development.
 Predict the development time and resources.
 Handle modern software practices like reuse, object orientation, and incremental
development.
COCOMO II provides a quantitative framework for estimating software development based on
various project attributes. It takes into account modern software development practices, such as:
 Object-oriented programming
 Component-based development
 Incremental and iterative development
 Software reuse
The model uses mathematical formulas to estimate the required effort by considering:
 The estimated size of the software (typically measured in KSLOC – Thousands of Source
Lines of Code)
 Various cost drivers (like product complexity, team experience, tool support)
 Scale factors (which affect how project size impacts effort non-linearly)

Main Models in COCOMO II


COCOMO II has three sub-models, each suited for different development stages:
1. Application Composition Model
 Used in the early stages (when prototypes or high-level user interfaces are developed).
 Based on the number of Object Points (screens, reports, and modules).
Effort (Person-Months) = (NOP / Productivity Rate)
Where NOP is New Object Points.

2. Early Design Model


 Used when requirements are roughly known, but the architecture isn't fully defined.
 Based on Unadjusted Function Points and a small set of cost drivers.
Effort = A × Size × EAF
Where:
 A is a constant (usually 2.94),
 Size is in KSLOC (thousands of source lines of code),
 EAF = Effort Adjustment Factor (based on cost drivers like reliability, complexity, etc.)

3. Post-Architecture Model
 Used after the project's architecture is well defined.
 More detailed: uses 17 cost drivers and 5 scale factors.
Effort = A × Size^E × ∏(Cost Drivers)
Where:
 A = 2.94 (default),
 Size = estimated size in KSLOC,
 E = exponent derived from scale factors,
 Cost Drivers include attributes like required reliability, platform difficulty, personnel
capability, etc.
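As a rough illustration of how the formulas above could be applied, the following Python sketch computes effort for the Early Design and Post-Architecture forms. The constant A = 2.94 comes from the notes above; the project size, EAF, scale-factor exponent E, and cost-driver values are assumptions chosen only for this example.

```python
from math import prod

A = 2.94  # constant used by both sub-models (see above)

def early_design_effort(size_ksloc: float, eaf: float) -> float:
    """Effort = A x Size x EAF, in person-months."""
    return A * size_ksloc * eaf

def post_architecture_effort(size_ksloc: float, exponent: float, cost_drivers: list) -> float:
    """Effort = A x Size^E x product of cost drivers, in person-months."""
    return A * (size_ksloc ** exponent) * prod(cost_drivers)

# Hypothetical project of 10 KSLOC with an effort adjustment factor of 1.2
print(round(early_design_effort(10, 1.2), 1))                            # 35.3 person-months

# Hypothetical exponent E = 1.10 and three cost-driver ratings
print(round(post_architecture_effort(10, 1.10, [1.10, 0.95, 1.05]), 1))  # about 40.6 person-months
```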

Key Features of COCOMO II


 Supports reuse and reengineering
 Can be applied to modern software environments (object-oriented, component-based)
 Adaptable to agile and iterative methodologies
 Can estimate both effort and schedule

Advantages
 Provides realistic cost estimates
 Scalable to different project sizes
 Supports early as well as late estimation

Disadvantages
 Requires historical data
 Complex and time-consuming
 Needs calibration for best accuracy
10. Software Design Patterns (10 Marks)
Definition
A software design pattern is a general, reusable solution to a common problem in software
design. It is not code itself but a template or guide for how to solve a problem that can occur in
many different situations. These patterns are well-tested and proven by software engineers over
time.

Importance of Design Patterns


 Help developers solve design problems efficiently.
 Provide a standard vocabulary to communicate complex solutions.
 Improve code reusability, maintainability, and scalability.
 Reduce the chances of errors by using proven solutions.
 Make the code easier for other developers to understand and extend.

Types of Design Patterns (GoF Classification)


Design patterns are usually divided into three main categories:
1. Creational Patterns
These deal with object creation mechanisms, trying to create objects in a manner suitable to the
situation.
 Examples:
o Singleton – Ensures only one object of a class is created.
o Factory Method – Creates objects without specifying the exact class.
o Builder – Constructs complex objects step by step.
2. Structural Patterns
These deal with the composition of classes and objects, ensuring that components can work
together smoothly.
 Examples:
o Adapter – Makes one interface compatible with another.
o Facade – Provides a simplified interface to a complex system.
o Decorator – Adds responsibilities to an object dynamically.
3. Behavioral Patterns
These focus on communication between objects, describing how they interact and distribute
responsibility.
 Examples:
o Observer – One object notifies others about changes.
o Strategy – Selects an algorithm at runtime.
o Command – Encapsulates a request as an object.
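To make two of these patterns concrete, here is a minimal Python sketch of a Singleton (creational) and an Observer (behavioral). The class and method names are only illustrative; they are not taken from any particular framework.

```python
class Logger:
    """Singleton: every call to Logger() returns the same single instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class Subject:
    """Observer: the subject notifies all registered observers about a change."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)


assert Logger() is Logger()                     # both calls give the same object

subject = Subject()
subject.attach(lambda e: print("received:", e))
subject.notify("order placed")                  # prints: received: order placed
```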
11. ✳️ Software Design Concepts
Software design concepts are the fundamental principles and guidelines that form the
foundation for building reliable, maintainable, and efficient software. These concepts help in
breaking down complex problems into manageable modules, improving code quality, and
ensuring that the software meets the user’s needs.
Below are the core software design concepts:
1. Abstraction
 Definition: Abstraction is the process of simplifying complex systems by focusing on
the essential details and hiding unnecessary information.
 In Software Design: In object-oriented design, abstraction is used to provide a simple
interface to a user while hiding the complexity of the underlying system.
 Example: A car's steering wheel is an abstraction of the complex machinery involved in
controlling the car. As a user, you only need to focus on the act of steering rather than
understanding how the car's mechanical systems work.
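A minimal Python sketch of this idea, using an abstract base class; the Vehicle and Car names are just for illustration.

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):
    """Abstraction: callers see only steer(); the machinery is hidden in subclasses."""
    @abstractmethod
    def steer(self, direction: str) -> None:
        ...

class Car(Vehicle):
    def steer(self, direction: str) -> None:
        # The complex mechanical details would be handled here.
        print(f"Turning {direction}")

Car().steer("left")   # the user deals only with the simple steer() interface
```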

2. Information Hiding
 Definition: Information hiding is the principle of concealing the internal details of a
system or module to reduce complexity and avoid misuse.
 In Software Design: By hiding implementation details, you ensure that external
components interact with software through well-defined interfaces, reducing
dependencies between them.
 Example: In object-oriented programming, classes often have private variables and
public methods to access or modify them, so the internal state of an object is hidden
from the outside world.
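A minimal sketch of information hiding in Python: the internal balance is kept private (via name mangling) and is reachable only through public methods. The BankAccount class is a made-up example.

```python
class BankAccount:
    def __init__(self, opening_balance: float = 0.0):
        self.__balance = opening_balance        # internal state, hidden from outside code

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def get_balance(self) -> float:
        return self.__balance

acct = BankAccount(100.0)
acct.deposit(50.0)
print(acct.get_balance())   # 150.0 -- callers never touch __balance directly
```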

3. Structure/Architecture
 Definition: This refers to the overall organization or layout of a system. Software
architecture defines how components interact, the data flow, and how the system is
structured at a high level.
 In Software Design: The architecture determines the system's scalability, security, and
performance. It involves decisions about patterns (like layered architecture, client-server
model, etc.).
 Example: A client-server architecture for a web application, where the front end (client)
sends requests to the back end (server) which processes and sends responses.

4. Modularity
 Definition: Modularity refers to dividing a software system into smaller, independent,
and interchangeable units (modules) that can be developed, tested, and maintained
separately.
 In Software Design: This allows teams to focus on specific components without
affecting the entire system. It also helps in reusability and maintainability of code.
 Example: In an online shopping application, you may have separate modules for user
authentication, product catalog, shopping cart, and payment processing.
5. Concurrency
 Definition: Concurrency refers to the ability of a system to perform multiple tasks
simultaneously (or seemingly so).
 In Software Design: Concurrency is crucial for performance, especially in applications
that handle numerous tasks at once (like web servers or gaming engines). It involves
techniques like multi-threading and parallel processing.
 Example: A web server can handle multiple client requests at the same time, improving
responsiveness and efficiency.
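A small sketch of concurrency in Python: several simulated requests are handled at the same time by a thread pool. The handle_request function and the timings are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id: int) -> str:
    time.sleep(0.1)                              # simulate I/O work such as a database call
    return f"response for request {request_id}"

# Five requests are processed concurrently instead of one after another
with ThreadPoolExecutor(max_workers=5) as pool:
    for result in pool.map(handle_request, range(5)):
        print(result)
```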

6. Verification
 Definition: Verification is the process of ensuring that the software behaves as expected
and meets the specified requirements.
 In Software Design: This involves activities like unit testing, integration testing, and
code reviews to check the correctness of the software and its components.
 Example: After developing a login module, you run tests to verify that it correctly
handles login attempts and returns appropriate error messages.

7. Aesthetics
 Definition: Aesthetics in software design refers to the appearance and user interface
(UI) aspects of the software. It focuses on visual appeal, usability, and user experience
(UX).
 In Software Design: Aesthetics are important because a good UI/UX ensures that users
can easily interact with the system and feel comfortable while using it.
 Example: A clean, intuitive, and well-organized design for an e-commerce website that
allows users to browse and make purchases easily.
12. Explain the Golden Rules for Interface Design
The golden rules of user interface design were proposed by Theo Mandel and are aimed at
improving how users interact with software. These rules are essential for making interfaces easy,
effective, and enjoyable to use. Each of the three golden rules is explained below.

1. Place the User in Control


This rule focuses on giving users the power to control how they interact with the system. It helps
users feel confident and in command of their actions.
Key principles:
 Avoid forcing users into unwanted actions: The interface should not trap users into
doing things in a particular way. They should have the freedom to choose how they
interact.
 Support flexible interaction: Users should be able to interact in multiple ways,
according to their comfort level.
 Allow interruption and undo: If users make a mistake or change their mind, they should
be able to stop the action or reverse it easily.
 Adapt to user skill level: As users become more skilled, the system should allow faster
or more advanced interactions, like shortcuts or custom settings.
 Hide complexity: Ordinary users don’t need to see the technical details. Keep the
interface clean and simple for them.
 Use direct manipulation: Let users interact with visual elements (like dragging and
dropping) rather than abstract commands. This feels more natural and intuitive.
Additional point: Giving feedback (like showing a progress bar) also makes users feel in control
because they know what's happening in the system.

2. Reduce the User’s Memory Load


This rule is about designing the interface so that users don’t have to remember too much
information while using it.
Key principles:
 Reduce short-term memory demands: Don’t make users remember long sequences,
steps, or complex data between screens.
 Set useful defaults: When the system fills in common options automatically, users don’t
need to think too hard.
 Use intuitive shortcuts: Shortcuts (like Ctrl+C to copy) should be easy to learn and
remember.
 Follow real-world metaphors: Use familiar visual designs (like a trash bin for delete) so
users can guess what things do without needing to learn from scratch.
 Progressive disclosure: Show only the necessary information first, and reveal more as
needed. This keeps the screen clean and helps users focus.
Extra explanation: Reducing memory load is especially important for new users or casual users
who might not remember how everything works. A well-designed UI reminds users rather than
making them recall everything.
3. Make the Interface Consistent
Consistency helps users feel comfortable, because they can predict how the interface will
behave.
Key principles:
 Keep the context meaningful: The design should help users understand what they are
doing and why. For example, labels and buttons should match the task at hand.
 Maintain consistency across platforms or apps: If a company has multiple apps, users
should see a similar layout, behavior, and design language in all of them.
 Respect user expectations: If users are used to a certain way of interaction, don’t change
it suddenly. Only make changes if they bring clear benefits and you explain them well.
Additional note: Consistency also includes things like using the same font, color scheme, and
terminology. This helps reduce confusion and learning effort.
13. RMMM Plan for an Online Examination System
Two Risks in an Online Examination System
1. Server Crash or Downtime during the Exam
If the server hosting the exam goes down, students may not be able to submit answers or
continue the test.
2. Cheating or Malpractice by Students
Students may use unfair means during the online exam, such as using mobile phones,
screen-sharing, or external help.

Now let’s choose the first risk (Server Crash or Downtime during the Exam) for Risk
Assessment and RMMM Plan.
Risk Parameter: Details
Risk Name: Server Crash or Downtime
Risk Type: Technical
Probability (Likelihood): High (especially during high-traffic hours)
Impact: Severe (exam interrupted, loss of student data)
Risk Exposure (RE): High (because both probability and impact are high)

✅ Risk Title: Server Crash or Downtime During the Online Examination
✅ Risk Description:
During an online examination, if the server hosting the exam portal crashes or experiences
downtime, it can lead to severe disruptions such as loss of student responses, inability to access
the exam, and unfair evaluation. This affects the credibility of the system and may lead to delays,
rescheduling, and administrative complications.

📋 RMMM Plan (Risk Mitigation, Monitoring, and Management Plan)
This plan consists of three parts:
1. Risk Mitigation
Mitigation means steps to reduce the chances of the risk happening.
 ✅ Use cloud servers with auto-scaling to handle high traffic.
 ✅ Host exams on a reliable and secure platform with high uptime.
 ✅ Do load testing before the exam to check server capacity.
 ✅ Create backup servers that can switch on automatically if the main server fails.
 ✅ Divide exam schedules into multiple slots to reduce server load.

2. Risk Monitoring
Monitoring means keeping track of the system to detect any early signs of risk.
 🔍 Use real-time monitoring tools (like New Relic or AWS CloudWatch) to track server
performance.
 🔍 Set up alerts for CPU, memory, and bandwidth usage crossing certain limits.
 🔍 Monitor login traffic and server health continuously during the exam.

3. Risk Management (Contingency Plan)


Management means what to do if the risk actually occurs.
 🧯 Immediately switch to a backup server if the main one fails.
 🧯 Notify students through email or SMS about the issue and extend exam time.
 🧯 Log all user activity before the crash so students don’t lose answers.
 🧯 Reschedule the exam if downtime was long or data was lost.
 🧯 Have a technical support team ready to respond instantly during exam hours.

Risk ID    Risk Description                               Category     Probability (P)   Impact (I)   Risk Exposure (RE = P × I)
RISK_001   Server crash or downtime during the exam      Technical    0.8               10           8.0
RISK_002   Unauthorized access or hacking                Security     0.6               9            5.4
RISK_003   Internet connectivity issues for students     External     0.7               7            4.9
RISK_004   System bugs causing wrong evaluation          Functional   0.5               8            4.0
RISK_005   User interface confusion leading to errors    Usability    0.4               6            2.4
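A small Python sketch showing how the risk exposure column above could be computed and the risks ranked; the list simply mirrors the table, and nothing here is prescribed by the RMMM approach itself.

```python
risks = [
    ("RISK_001", "Server crash or downtime during the exam",   0.8, 10),
    ("RISK_002", "Unauthorized access or hacking",             0.6,  9),
    ("RISK_003", "Internet connectivity issues for students",  0.7,  7),
    ("RISK_004", "System bugs causing wrong evaluation",       0.5,  8),
    ("RISK_005", "User interface confusion leading to errors", 0.4,  6),
]

# Risk exposure RE = P x I, ranked from highest to lowest
for risk_id, description, p, i in sorted(risks, key=lambda r: r[2] * r[3], reverse=True):
    print(f"{risk_id}: RE = {p * i:.1f}  ({description})")
```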
🔴 Top Risks in Online Examination Systems
1. Server Crash or Downtime
 If the main exam server fails, the exam may become inaccessible for all students, risking
data loss or delay.
2. Internet Connectivity Issues (Student Side)
 Students may lose connection during the exam due to weak internet, preventing
submission or real-time participation.
3. Unauthorized Access or Hacking
 Hackers or unauthorized users may gain access to exam data or question papers, risking
leaks or manipulation.
4. Question Paper Leak
 Internal staff or attackers could leak question sets before the exam, leading to unfair
practices and loss of trust.
5. Power Failure (Student Side)
 Students without backup power may lose access to the exam suddenly, causing stress and
unfair assessment.
6. Malfunction of Exam Software
 Bugs or errors in the exam software could freeze the system, cause incorrect grading, or
log out students unexpectedly.
7. Time Synchronization Errors
 System clock mismatches can start or end exams at the wrong times, giving unfair
advantage to some users.
8. Authentication Failure
 Biometric or password-based login may fail, preventing legitimate students from
accessing their exam.
9. Data Loss (Due to No Auto-Save)
 If the exam platform doesn’t auto-save answers periodically, students may lose all
progress due to disconnection.
10. Scalability Issues During Peak Load
 Large numbers of students accessing the system simultaneously may lead to slow
performance or crashes.
11. Lack of Proctoring or Cheating Detection
 Without AI-based or manual proctoring, students may resort to unethical practices
without being caught.
12. Miscommunication or Confusion Over Exam Instructions
 Poorly designed UI or unclear instructions may confuse students, leading to incorrect
responses or skipped questions.
14. SCM (Software Configuration Management) Process

Software Configuration Management (SCM) is a discipline of software engineering that focuses
on tracking, controlling, and managing changes in software throughout its development lifecycle.
It ensures that changes are systematic, traceable, and do not affect the integrity of the software
system.

Whenever software is built, there is always scope for improvement, and those improvements bring
changes with them. Changes may be required to modify or update an existing solution or to create
a new solution for a problem. Requirements keep changing, so systems must keep being upgraded
to meet the desired outputs. Changes should be analyzed before they are made to the existing
system, recorded before they are implemented, reported so that the details of the before and after
states are available, and controlled in a manner that improves quality and reduces errors. This is
where the need for Software Configuration Management arises. SCM is a set of activities that
controls change by identifying the items subject to change, establishing relationships between
those items, defining mechanisms for managing different versions, controlling the changes being
implemented in the current system, and auditing and reporting on the changes made. It is essential
to control changes, because unchecked changes may end up undermining otherwise well-run
software. In this way, SCM is a fundamental part of all project management activities.
🔹 Key Activities of the SCM Process:

1. Configuration Identification:
This is the first and most critical activity in the SCM process. It involves identifying and defining
all the software items that need to be controlled and managed during the software development
lifecycle. These items are known as Configuration Items (CIs) and can include source code,
design documents, requirements documents, test scripts, user manuals, and executable files. Each
CI is assigned a unique identifier and version label to track it easily. This process also involves
defining relationships among the items and grouping them logically to form baselines. A baseline
represents a formally reviewed and agreed-upon version of a software product that serves as a
starting point for further development.
2. Change Control:
Once configuration items are identified, managing changes to them becomes essential. Change
Control is a structured process for proposing, evaluating, approving, and implementing changes
to configuration items. The process begins with the submission of a Change Request (CR), which
is reviewed by a Change Control Board (CCB). The CCB assesses the technical, financial, and
schedule impact of the requested change and decides whether to approve or reject it. Approved
changes are implemented in a controlled manner, ensuring that no unauthorized modifications
are made. This process ensures that changes are made deliberately and systematically, reducing
the risk of errors and inconsistencies.
3. Version Control:
Version Control, also known as Revision Control, is the process of managing multiple versions
of software configuration items. This is crucial in environments where multiple team members
are working on the same project, as it allows tracking of changes made over time. Each version
of a configuration item is labeled and stored so that previous versions can be retrieved if
necessary. Tools like Git, SVN, or Mercurial are commonly used for version control. They help
developers avoid conflicts, support collaborative development, and maintain a history of
changes. In case of bugs or failures, the system can roll back to a previous stable version.
4. Configuration Auditing:
Configuration Auditing is the process of verifying that configuration items conform to their
specifications and that changes have been made according to approved procedures. Audits are
carried out to ensure completeness, correctness, and compliance with organizational standards.
There are two types of configuration audits: functional audits (which verify if the item performs
its intended function) and physical audits (which check the physical presence and documentation
of the configuration item). These audits help maintain quality and integrity by identifying
unauthorized or incomplete changes and ensuring that all required documents are up to date and
properly maintained.
5. Configuration Status Reporting (Status Accounting):
This process involves recording and reporting information about the status of configuration items
throughout the software development lifecycle. It provides detailed records of what configuration
items exist, their versions, and the history of changes made to them. Status reports include data
such as who made a change, when it was made, what was changed, and why. This documentation
helps project managers, developers, and auditors understand the current state of the system and
make informed decisions. Configuration status reporting ensures transparency, traceability, and
accountability in the software development process.
✅ Software Configuration Management (SCM) Process – Full
Explanation for 10 Marks
SCM is a process used to manage changes in software during the development life cycle. It
ensures that all changes are made in a controlled way, without affecting the quality or stability of
the project. The five main activities of SCM are:

1. Identification
This step means giving a unique name or ID to every important file or item in the project so that
we can track it. These items can be source code files, documents, reports, or test plans. Once
identified, these items are grouped as configuration items. For example, a file like login.java
may be version 1.0, and if it changes, we label it as version 1.1. This helps in keeping a record of
what’s being developed and changed.

2. Change Control
Change Control is used when someone wants to change something in the software, like a bug fix
or a new feature. The person must raise a Change Request (CR). Then a team called the
Change Control Board (CCB) reviews whether the change is needed and safe. If approved, the
change is done carefully. This process ensures that no unauthorized or risky changes are made,
and everything stays organized.

3. Version Control
This is used when we need to keep track of different versions of software files. It helps when
multiple developers are working together. For example, if one developer works on version 1.0
and another works on version 2.0, version control tools like Git make sure their work doesn’t
conflict. You can go back to an older version anytime if something goes wrong. So, version
control keeps a history of changes and helps in team collaboration.

4. Configuration Auditing
After changes are made, we must check that everything is correct. This checking is called
auditing. It ensures the final files match the expected versions and no changes are missing or
unauthorized. There are two types:
 Functional audit – checks if the software works as expected.
 Physical audit – checks if all files, documents, and records are complete and properly
stored.

5. Reporting (Status Accounting)


This step means keeping records of everything: what files we have, their versions, who changed
them, why, and when. This helps the team stay informed and makes the development transparent.
Managers and developers can easily check the status of the project and all its parts. It also helps
during project reviews and audits.

15. Formal Technical Review (FTR) in Software Engineering



Formal Technical Review (FTR) is a software quality control activity
performed by software engineers. It is an organized, methodical
procedure for assessing and raising the standard of any technical
work product, including software artifacts. The main objectives of
an FTR are to find defects, ensure that standards are followed, and
improve the overall quality of the product or document under
review. Although FTRs are most often used in software
development, other technical fields can also apply the concept.
Objectives of formal technical review (FTR)
 Defect Identification: Identify defects in technical work products
by finding and fixing mistakes, inconsistencies, and
deviations.
 Quality Assurance: To ensure the software meets quality
standards and requirements.
 Risk Mitigation: To stop risks from getting worse,
proactively identify and manage possible threats.
 Knowledge Sharing: Encourage team members to work
together and build a common knowledge base.
 Consistency and Compliance: Verify that all procedures,
coding standards, and policies are followed.
 Learning and Training: Give team members the chance to
improve their abilities through learning opportunities.
In addition, the FTR serves as a training ground, enabling junior
engineers to observe the analysis, design, coding, and testing
approach more closely. The FTR also promotes backup and
continuity, because a number of people become familiar with parts
of the software they might not have seen otherwise. FTR is a class
of reviews that includes walkthroughs, inspections, round-robin
reviews, and other small-group technical assessments of software.
Each FTR is conducted as a meeting and is considered successful
only if it is properly planned, controlled, and attended.
Activities Involved in FTR
FTR follows a step-by-step process:
i) Planning:
The moderator schedules the meeting, selects participants, and distributes the product (e.g.,
design or code) to be reviewed. Everyone gets time to prepare.
ii) Overview Meeting:
Sometimes held to explain complex components of the software so reviewers can understand it
before reviewing.
iii) Preparation:
Reviewers read the material independently and identify issues, errors, or questions.
iv) Review Meeting:
The actual FTR meeting is conducted. The author presents their work. Each reviewer discusses
their observations. The recorder notes down defects and suggestions. The moderator ensures the
discussion stays focused and respectful.
v) Rework:
The author revises the product to correct the issues identified during the review.
vi) Follow-up:
The moderator checks that all issues have been addressed properly. A summary report is created
for record-keeping.
16. Walkthrough
The walkthrough is a review meeting process, but it differs from an
inspection in that it does not involve any formal process; it is an
informal review. The walkthrough is usually initiated by the author
of the code or document.
In a walkthrough, the code or document is read by the author, and
the others present in the meeting note down important points,
record defects, and give suggestions about them. The walkthrough
is an informal way of reviewing, and no formal authority is involved.
Because the process is informal, there is no need for a moderator
while performing a walkthrough. A walkthrough can be called an
open-ended discussion; it does not focus heavily on documentation,
and defect tracking is one of the more challenging tasks in a
walkthrough.
Advantages and Objectives of Walkthrough:
Following are some of the objectives of the walkthrough.
 To detect defects in developed software products.
 To fully understand and learn the development of software
products.
 To properly explain and discuss the information present in
the document.
 To verify the validity of the proposed system.
 To give suggestions and report them appropriately with new
solutions and ideas.
 To provide an early “proof of concept”.
17. Difference Between White Box Testing and Black Box Testing
Definition: White Box tests the internal logic and structure of the code; Black Box tests inputs and outputs without knowing the internal code.
Knowledge Required: White Box requires knowledge of programming and the internal code; Black Box requires no knowledge of the internal code.
Tester: White Box is usually done by developers; Black Box is usually done by testers or end users.
Focus: White Box focuses on how the software works (implementation); Black Box focuses on what the software does (functionality).
Test Coverage: White Box covers paths, branches, conditions, loops, and every line of the code; Black Box covers requirements, functionality, and the user interface without looking inside the code.
Techniques Used: White Box uses code coverage, path testing, loop testing, etc.; Black Box uses equivalence partitioning, boundary value analysis, etc.
Examples: White Box includes unit testing and integration testing; Black Box includes system testing and acceptance testing.
Time Consumption: White Box is more time-consuming as it requires code analysis; Black Box is less time-consuming for large applications.
Automation: White Box is harder to automate fully due to its complexity; Black Box is easier to automate using tools like Selenium, QTP, etc.
Error Detection: White Box detects hidden internal errors and logic flaws; Black Box detects incorrect or missing functionality.
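To illustrate one common black-box technique (boundary value analysis), here is a short Python sketch; the grade function and its pass mark of 40 are assumptions made up for the example.

```python
def grade(score: int) -> str:
    """Hypothetical function under test: pass mark assumed to be 40."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "PASS" if score >= 40 else "FAIL"

# Boundary value analysis: test values at and around the boundaries, ignoring the internals
boundary_cases = {0: "FAIL", 39: "FAIL", 40: "PASS", 100: "PASS"}
for score, expected in boundary_cases.items():
    assert grade(score) == expected
```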

18. Explain the Following Testing Strategies


Unit Testing
Unit testing is the first level of software testing where individual components or modules of a
software are tested in isolation. The main goal of unit testing is to ensure that each unit of the
software performs as expected. A unit is the smallest testable part of any software, such as a
function, method, or procedure. This type of testing is usually performed by developers during
the development phase. Since it focuses on individual units, it helps in identifying bugs early in
the development cycle. Unit tests are generally automated and written using frameworks like
JUnit for Java or unittest for Python.
Unit testing is the process of testing the smallest parts of your
code; each unit is run one by one to verify the code's correctness.
It is a key part of software development that improves code quality
by testing each unit in isolation.
You write unit tests for these code units and run them
automatically every time you make changes. If a test fails, it helps
you quickly find and fix the issue. Unit testing promotes modular
code, ensures better test coverage, and saves time by allowing
developers to focus more on coding than on manual testing.
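A minimal sketch of a unit test written with Python's unittest framework, which the notes mention; the add function is a made-up unit under test.

```python
import unittest

def add(a, b):
    """The 'unit' under test: a tiny, isolated function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```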

Integration Testing
Integration testing comes after unit testing and focuses on testing the interaction between
multiple modules. In real-world applications, different modules often depend on each other to
function correctly. Integration testing ensures that these modules work together as intended. It
helps identify issues like incorrect data passing, interface mismatches, or inconsistent behavior
when modules interact. There are several approaches to integration testing such as top-down,
bottom-up, and big-bang. This type of testing is important because even if individual units work
perfectly, they may fail when integrated due to dependency or communication issues.
Integration testing is typically performed after unit testing and
before system testing. It helps to identify and resolve integration
issues early in the development cycle, reducing the risk of more
severe and costly problems later on. Integration testing is one of
the basic types of software testing. It is important because it
verifies that individual software modules or components work
together correctly as a whole system. This ensures that the
integrated software functions as intended and helps identify any
compatibility or communication issues between different parts of
the system. By detecting and resolving integration problems early,
integration testing contributes to the overall reliability,
performance, and quality of the software product.
Validation Testing
Validation testing is a type of software testing that ensures the product meets the business
requirements and expectations of the client. It checks whether the right product has been built.
Validation testing is typically done after integration testing and includes activities like
acceptance testing, system testing, and performance testing. This form of testing focuses on
validating the software against the requirements gathered during the planning and analysis phase.
It answers the question: “Did we build the right product?” This type of testing is crucial for
ensuring customer satisfaction and confirming that the final product will fulfill its intended use.

System Testing
System testing is a high-level test that evaluates the complete and integrated software system. It
is conducted after integration testing and involves testing the entire application as a whole to
ensure it meets the specified requirements. System testing checks both functional and non-
functional aspects of the application such as performance, security, usability, and reliability. It is
typically performed in an environment that closely mirrors the production environment. The
objective is to verify that the system behaves as expected in all possible scenarios. It is the final
step before the software is delivered to the customer.
 The goal of integration testing is to detect any irregularity
between the units that are integrated, whereas system
testing detects defects both within the integrated units and
in the system as a whole. The result of system testing is the
observed behavior of a component or a system when it is
tested.
 System testing is carried out on the whole system in the
context of the system requirement specification, the
functional requirement specification, or both. It tests the
design and behavior of the system as well as the
expectations of the customer.
 It is performed to test the system beyond the bounds
mentioned in the software requirements specification (SRS).
System testing is performed by a testing team that is
independent of the development team, which helps to test
the quality of the system impartially.
 It includes both functional and non-functional testing.
System testing is a black-box testing technique, performed
after integration testing and before acceptance testing.
 Testers do not need detailed programming knowledge to
carry out this testing.
 Because it tests the entire product, it can detect errors or
defects that cannot be identified during unit testing and
integration testing.
 The testing environment is similar to the real production or
business environment.
 It checks the entire functionality of the system with different
test scripts.

19. Reverse Engineering


Software Reverse Engineering is a process of recovering the design,
requirement specifications, and functions of a product from an
analysis of its code. Steps of Software Reverse Engineering:
1. Information Collection: This step focuses on collecting all possible information (e.g., source code, design documents, etc.) about the software.
2. Examining the Information: The information collected in
step-1 is studied so as to get familiar with the system.
3. Extracting the Structure: This step concerns identifying the program structure in the form of a structure chart, where each node corresponds to some routine (a small illustrative sketch follows this list).
4. Recording the Functionality: During this step, the processing details of each module of the structure chart are recorded using a structured language, decision tables, etc.
5. Recording Data Flow: From the information extracted in
step-3 and step-4, a set of data flow diagrams is derived to
show the flow of data among the processes.
6. Recording Control Flow: The high-level control structure
of the software is recorded.
7. Review Extracted Design: The design document
extracted is reviewed several times to ensure consistency
and correctness. It also ensures that the design represents
the program.
8. Generate Documentation: Finally, in this step, the
complete documentation including SRS, design document,
history, overview, etc. is recorded for future use.
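As a rough, illustrative sketch of step 3 above (extracting the structure), the snippet below uses Python's standard ast module to build a crude map of which function calls which from a small made-up source fragment; real reverse-engineering tools are far more sophisticated, but the idea of recovering structure from code is the same.

import ast

def extract_call_structure(source_code):
    # Returns {function_name: [names it calls]} -- a crude stand-in for a structure chart.
    tree = ast.parse(source_code)
    structure = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = []
            for child in ast.walk(node):
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    calls.append(child.func.id)
            structure[node.name] = calls
    return structure

sample = '''
def validate(data):
    return data is not None

def register(data):
    if validate(data):
        save(data)

def save(data):
    print("saved", data)
'''

print(extract_call_structure(sample))
# {'validate': [], 'register': ['validate', 'save'], 'save': ['print']}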

20. Software Re-engineering – Detailed Explanation


Software Re-engineering is the process of analyzing, redesigning, and rebuilding existing
software systems to improve their performance, maintainability, and adaptability. It is a cost-
effective alternative to completely rewriting legacy systems. Instead of discarding the old
system, re-engineering enhances it using modern practices and technologies. The process
involves several structured steps, which are explained below:
1. Inventory Analysis
Inventory analysis is the first step in software re-engineering. It involves identifying and
evaluating all existing software assets to decide which ones require re-engineering. The
evaluation is based on factors like the system's business value, technical quality, frequency of
changes, and maintenance effort. This step helps prioritize which systems are worth investing
time and resources in. Systems that are critical to operations but hard to maintain or upgrade are
selected for re-engineering.

2. Document Restructuring
In legacy systems, documentation is often outdated, inconsistent, or poorly organized. Document
restructuring focuses on improving the structure and readability of existing documentation.
Instead of creating new documents, this step involves updating outdated information,
reformatting content for clarity, correcting technical inaccuracies, and organizing the
documentation according to modern standards. This makes it easier for developers and
stakeholders to understand the system and supports later stages of re-engineering.

3. Reverse Engineering
Reverse engineering is the process of analyzing the existing software to understand its design,
structure, and functionality. It helps uncover the internal logic and architecture of the system,
especially when original design documents are missing or unclear. In this phase, diagrams such
as data flow diagrams, control flow graphs, and module interaction charts may be created.
Reverse engineering does not modify the software but prepares it for restructuring by providing a
clear view of its internal workings.

4. Code Restructuring
After understanding the system through reverse engineering, the next step is to restructure the
code. Code restructuring improves the readability, efficiency, and maintainability of the source
code without changing its external behavior. This may include refactoring duplicated code,
simplifying complex logic, renaming variables for clarity, and organizing modules more
effectively. Code restructuring reduces technical debt and makes future enhancements easier to
implement.
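A tiny before-and-after sketch (with invented names) of the kind of change made during code restructuring; the external behavior stays the same while duplication and unclear naming are removed:

# Before restructuring: duplicated logic and unclear names.
def calc(a, b, t):
    if t == 1:
        return a * b * 0.1
    elif t == 2:
        return a * b * 0.2
    else:
        return a * b * 0.0

# After restructuring: same external behavior, clearer names, no duplicated expressions.
DISCOUNT_RATES = {1: 0.1, 2: 0.2}

def calculate_discount(price, quantity, customer_type):
    rate = DISCOUNT_RATES.get(customer_type, 0.0)
    return price * quantity * rate

# Behavior is preserved, which is the defining property of restructuring.
assert calc(100, 2, 1) == calculate_discount(100, 2, 1)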
5. Data Restructuring
In older systems, data structures and storage methods are often outdated or inefficient. Data
restructuring focuses on improving the way data is organized, stored, and accessed. This step
may involve normalizing databases, updating file formats, correcting inconsistencies, and
removing redundancy. The goal is to make the data more consistent, reliable, and compatible
with modern applications. Improved data structures also support better performance and
scalability.
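A toy sketch of data restructuring, with invented field names: a denormalized set of records that repeats student and course details is reorganized into separate, non-redundant structures.

# Before: every enrollment repeats the full student and course details (redundant).
enrollments_old = [
    {"student_id": "S1", "student_name": "Asha", "course_id": "C1", "course_name": "SEPM"},
    {"student_id": "S1", "student_name": "Asha", "course_id": "C2", "course_name": "DBMS"},
]

# After: redundancy removed by separating students, courses, and the relationship between them.
students = {"S1": {"name": "Asha"}}
courses = {"C1": {"name": "SEPM"}, "C2": {"name": "DBMS"}}
enrollments = [("S1", "C1"), ("S1", "C2")]

# The same information can still be reconstructed when needed.
for student_id, course_id in enrollments:
    print(students[student_id]["name"], "->", courses[course_id]["name"])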
6. Forward Engineering
Forward engineering is the final stage where the improved system is rebuilt using modern tools,
technologies, and best practices. Based on the refined code, restructured data, and updated
documentation, the software is re-implemented to align with current standards. This may include
porting the system to a new platform, enhancing the user interface, or integrating it with other
systems. The end result is a more robust, efficient, and future-ready application.

21. 4 P's of Project Management


Project Management involves planning, organizing, and managing resources to achieve specific
goals. To handle any software or technical project successfully, four key elements—known as
the 4 P’s of Project Management—must be managed carefully. These are: People, Product,
Process, and Project.

1. People
People are the most important part of any project. They include everyone involved in the project
—such as project managers, developers, testers, designers, clients, and stakeholders. Effective
communication, teamwork, leadership, motivation, and conflict management are crucial for the
success of the project. If people are not managed well, even a well-planned project can fail.

2. Product
This refers to the actual outcome or deliverable of the project, like software, application, or
service. Before starting the project, it is important to understand what the product should do, who
the users are, and what features are needed. Clear product requirements help in planning better
and reduce chances of misunderstandings later in the project.

3. Process
The process is the set of steps or methods followed to complete the project. It includes planning,
development, testing, delivery, and maintenance. Choosing the right process model (such as
Waterfall, Agile, or Spiral) based on the project's size, complexity, and requirements is essential
for timely and efficient completion.

4. Project
The project refers to the actual work of managing tasks, timelines, budgets, and risks. It includes
scheduling work, assigning tasks to team members, tracking progress, and ensuring that the work
is done on time and within budget. Good project management ensures smooth progress and
reduces the risk of failure.
The project can also be considered a blueprint of the process. Here, the project manager plays a critical role: guiding team members toward the project's targets and objectives, assisting them with issues, keeping an eye on cost and budget, and making sure the project stays on track with the given deadlines.

22. W5HH Principle Explained


The W5HH Principle is a project management framework introduced by
Barry Boehm. It helps in thorough planning and monitoring of a software
project by asking seven key questions. Each letter in W5HH stands for a
question that addresses an important aspect of project planning and control.

Here is a detailed explanation of the W5HH Principle:


The acronym W5HH stands for:
1. Why is the system being developed?
This question focuses on the purpose and goals of the project. It helps in understanding the
business need or problem the software aims to solve. Without a clear reason, the project can
lack direction.

2. What will be done?


It specifies the task set required for the project.
This asks for the scope and major features of the system. It involves identifying requirements,
deliverables, and a clear description of what the software should do.

3. When will it be done?


This addresses the timeline and scheduling of the project. It includes setting deadlines,
milestones, and delivery dates to ensure the project is completed on time.

4. Who is responsible for what?


This defines the roles and responsibilities of each team member. It clarifies who does what,
which avoids confusion and improves coordination among the team.
5. Where are they organizationally located?
This question refers to the location and structure of the team. It helps in understanding how
teams are organized, especially in large or distributed projects (e.g., in different departments or
countries).

6. How will the job be done technically and managerially?


This focuses on the process and methods to be used. It includes selecting the development
model (like Agile or Waterfall), tools, coding standards, and testing strategies to complete the
project.
7. How much of each resource is needed?
This addresses the resources and budget needed to finish the project. It includes estimating
costs, manpower, time, and tools required to achieve project goals.

23. Phases in the Project Life Cycle (10 Marks Answer)


The Project Life Cycle defines the sequence of phases that a project passes through from its
initiation to closure. It offers a structured approach to manage a project effectively and ensures
that all necessary tasks are completed in a logical and efficient manner. There are typically five
main phases in a project life cycle:
1. Project Initiation
This is the first phase where the project is officially started. In this stage, the main goal is to
define the purpose and objectives of the project. A feasibility study is often conducted to
evaluate whether the project is technically and economically viable. Stakeholders are identified,
and a project charter is created that outlines the goals, scope, budget estimates, and initial risks.
Approval is sought from decision-makers to proceed further. This phase essentially answers
whether the project should be undertaken and what value it will bring.
2. Project Planning
Once the project is approved, the next step is to create a detailed plan that will guide the
execution of the project. This phase involves defining the scope in detail, listing the tasks
required, setting a timeline, estimating costs, and identifying the resources needed. A Work
Breakdown Structure (WBS) is often prepared to divide the project into manageable sections.
Risk management plans, communication strategies, and quality assurance plans are also created
during this phase. The planning phase is crucial because it sets the foundation for the rest of the
project and ensures that the team knows what needs to be done and how to do it.
3. Project Execution
During this phase, the actual work of the project is carried out according to the plan. Resources
are allocated, team members are assigned tasks, and the development or construction work
begins. The project manager ensures that everyone is working according to the schedule, the
deliverables are being produced, and quality standards are being followed. This phase requires
strong coordination among team members and continuous communication with stakeholders.
Any issues or unexpected challenges are addressed promptly to keep the project on track.
4. Project Monitoring and Controlling
This phase happens alongside project execution and focuses on tracking the project’s
performance and progress. The main objective here is to ensure that the project stays on schedule
and within budget while meeting quality standards. Key performance indicators (KPIs) are
measured regularly, and corrective actions are taken if the project deviates from the plan. Scope
changes, budget adjustments, and risk management are handled during this phase. Monitoring
and controlling help in maintaining project alignment with its goals and avoiding delays or cost
overruns.
5. Project Closure
The final phase of the project life cycle is closure. Once all the project objectives have been
achieved, and the deliverables have been accepted by the client or stakeholders, the project is
officially closed. This phase includes final testing, documentation handover, releasing project
resources, and conducting a post-project review or lessons learned session. The goal is to
evaluate what went well, what could be improved in future projects, and to ensure that all
contractual obligations are fulfilled. A final report is usually submitted, and any remaining
administrative tasks are completed.

24. Project Scheduling, Steps and Techniques (10 Marks Answer)
Project scheduling is the process of planning the order of tasks, estimating the time needed to
complete them, and organizing resources to ensure that a project is finished efficiently and on
time. It is a vital part of project management that ensures timely execution, proper use of
resources, and clear visibility of the work progress. To schedule a project effectively, managers
follow a series of structured steps and apply specific techniques. Below are the main steps
involved in project scheduling, followed by techniques such as CPM and PERT.

Project Scheduling Steps


1. Define Activities:
The first step in project scheduling is to identify and list all the activities or tasks that need to be
performed to complete the project. This is often done by breaking down the entire project into
smaller, manageable components using a tool called the Work Breakdown Structure (WBS).
Each activity should be clearly defined with its objectives and deliverables.
2. Sequence Activities:
After defining all the tasks, the next step is to determine the correct sequence in which they
should be performed. This involves identifying dependencies among tasks—i.e., which activities
must be completed before others can start. This step ensures that the workflow is logical and that
tasks are linked based on their relationships (e.g., Finish-to-Start or Start-to-Start).
3. Estimate Time:
Once the activities are sequenced, the next step is to estimate how much time each task will take.
This includes using expert judgment, historical data, and in uncertain situations, three-point
estimation methods. Accurate time estimation is important because it directly affects the total
duration of the project.
4. Develop Schedule:
In this final step, all the information gathered in the previous steps is used to create a project
schedule. This involves plotting the activities on a timeline, allocating resources, and identifying
the critical path—the longest path of dependent activities that determines the minimum time
required to complete the project. Tools like Gantt charts, CPM, and PERT charts are used to
visualize the schedule.
Techniques: CPM and PERT
Critical Path Method (CPM):
CPM is a scheduling technique used when the duration of activities is known with certainty. It
identifies the critical path—the longest path of dependent activities in the project network. This
path determines the minimum time required to complete the project. If any activity on the critical
path is delayed, the entire project gets delayed. CPM is most useful in projects like construction
or software development where task durations are predictable. It also helps in identifying slack
time and allocating resources efficiently.
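A minimal sketch of the critical-path idea for a small, assumed activity network (the durations and dependencies below are made up purely for illustration):

# Activities with assumed durations (in days) and their predecessors.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish(activity):
    # Forward pass: earliest finish = latest earliest-finish among predecessors + own duration.
    if activity not in earliest_finish:
        start = max((finish(p) for p in predecessors[activity]), default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

project_duration = max(finish(a) for a in durations)
print("Minimum project duration:", project_duration, "days")  # A -> C -> D gives 9 days

Here A -> C -> D is the critical path: delaying any of these three activities delays the whole project, while B has slack.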
Program Evaluation and Review Technique (PERT):
PERT is used when activity durations are uncertain. It uses three time estimates for each activity:
optimistic time (O), most likely time (M), and pessimistic time (P). These are combined to
calculate the expected time (TE) using the formula:
TE = (O + 4M + P) / 6
PERT helps in analyzing the probability of completing the project within a certain time frame. It
is especially useful in research and development projects where task durations are unpredictable.
Like CPM, PERT also uses a network diagram to represent tasks and their dependencies.
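A quick worked application of the TE formula with assumed estimates for a single activity:

def pert_expected_time(optimistic, most_likely, pessimistic):
    # TE = (O + 4M + P) / 6
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Assumed estimates: O = 4 days, M = 6 days, P = 14 days.
print(pert_expected_time(4, 6, 14))  # (4 + 24 + 14) / 6 = 7.0 days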

25. Definitions
a. Project:
A project is a task or set of tasks done to achieve a goal. It has a clear start and end, and it uses
time, money, and people to create something like a product or result.

b. Critical Path:
The critical path is the longest set of tasks in a project that must be finished on time for the whole
project to be completed on time. If any task on this path is delayed, the entire project will be
delayed.

c. Earned Value:
Earned value shows how much work has actually been done in a project compared to what was
planned. It helps check if the project is on track.
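A small illustration with assumed figures, using the standard earned-value relationships (EV = actual % complete × budget, PV = planned % complete × budget, SV = EV - PV):

# Assumed figures for illustration only.
budget_at_completion = 100000    # total planned budget (Rs.)
planned_percent_complete = 0.50  # work planned to be done by today
actual_percent_complete = 0.40   # work actually done by today

planned_value = planned_percent_complete * budget_at_completion  # 50,000
earned_value = actual_percent_complete * budget_at_completion    # 40,000
schedule_variance = earned_value - planned_value                 # -10,000 => behind schedule
print(planned_value, earned_value, schedule_variance)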
d. Process:
A process is a step-by-step way of doing work. In software, it means how we plan, build, test,
and fix the software.

e. Scope:
Scope means what a project will and won’t do. It lists the tasks, goals, and results the project is
expected to achieve.

Numerical
Assume that a simple student course-registration system is planned to be developed and its estimated size is approximately 10,000 lines of code. The organization proposes to pay Rs. 25,000/month to software engineers. Compute the development effort and development time.
To compute the development effort and development time for a student registration system with
10,000 lines of code, we use the Basic COCOMO model.

Given:
- Estimated size = 10,000 LOC = 10 KLOC
- Salary = Rs. 25,000/month per developer

Basic COCOMO Model for Organic Projects:


- Effort (E) = a × (KLOC)^b
- Development Time (D) = c × (Effort)^d

Where the constants for organic type are:


- a = 2.4, b = 1.05, c = 2.5, d = 0.38

Step 1: Calculate Effort


E = 2.4 × (10)^1.05
(10)^1.05 ≈ 11.22
E = 2.4 × 11.22 ≈ 26.93 person-months

Step 2: Calculate Development Time


D = 2.5 × (26.93)^0.38
(26.93)^0.38 ≈ 3.50
D = 2.5 × 3.50 ≈ 8.74 months

Final Answers:
- Effort ≈ 26.93 person-months
- Development Time ≈ 8.74 months

Optional: Total Cost = Effort × Salary


= 26.93 × 25,000 ≈ ₹6,73,250
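For a quick check, the same calculation can be reproduced in a few lines of Python using the organic-mode constants above:

kloc = 10
a, b, c, d = 2.4, 1.05, 2.5, 0.38

effort = a * kloc ** b       # ≈ 26.93 person-months
dev_time = c * effort ** d   # ≈ 8.74 months
cost = effort * 25000        # ≈ Rs. 6,73,200 (≈ ₹6,73,250 above, where effort was rounded to 26.93 first)

print(round(effort, 2), round(dev_time, 2), round(cost))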
