
SADT

NOTES

Information System
An information system (IS) refers to a combination of people, processes, data, and
technology that work together to collect, store, process, analyze, and disseminate
information for supporting decision-making, coordination, control, and other
organizational activities. It encompasses the use of technology and resources to
manage and utilize information effectively within an organization.
Categories of Information Systems:-
1. Transaction Processing Systems (TPS): TPS capture and process transactional
data generated by daily business operations, such as sales, inventory, and payroll.
They focus on operational efficiency and data accuracy.

2. Management Information Systems (MIS): MIS provide reports and summaries


based on data from TPS. They support middle management by providing
structured information for decision-making and monitoring the performance of
departments or business processes.

3. Decision Support Systems (DSS): DSS assist in decision-making by providing


interactive tools and models for analyzing data and evaluating different scenarios.
They help managers make informed decisions by providing insights, predictions,
and recommendations.

4. Executive Information Systems (EIS): EIS are designed to support top-level


executives in strategic decision-making. They provide summarized and highly
aggregated information from various sources to aid strategic planning and
control.

5. Enterprise Resource Planning (ERP) Systems: ERP systems integrate various


business functions and processes, such as finance, human resources,
procurement, and manufacturing, into a single, cohesive system. They enable data
sharing, process automation, and cross-functional coordination.

6. Knowledge Management Systems (KMS): KMS capture, store, organize, and


distribute knowledge and expertise within an organization. They facilitate
knowledge sharing, collaboration, and learning to improve decision-making and
innovation.

Information Strategies:-
1. Information Capture Strategy: This strategy focuses on collecting accurate
and relevant data from various sources, ensuring data quality and integrity
through proper validation and verification processes.

2. Information Storage and Retrieval Strategy: It involves organizing and


storing data in a structured manner, implementing databases or data warehouses,
and developing efficient search and retrieval mechanisms to access information
when needed.

3. Information Analysis Strategy: This strategy focuses on transforming raw data


into meaningful information through data analysis techniques such as data
mining, statistical analysis, and visualization to identify patterns, trends, and
insights.

4. Information Distribution Strategy: This strategy involves determining how


and when to deliver information to the right people in the organization. It
includes establishing communication channels, designing reports and
dashboards, and implementing collaboration tools.

5. Information Security and Privacy Strategy: It encompasses measures and


policies to protect information from unauthorized access, ensure data
confidentiality, integrity, and availability, and comply with relevant regulations
and standards.

6. Information Governance Strategy: This strategy involves establishing


policies, procedures, and responsibilities for managing and governing
information assets, ensuring data quality, compliance, and alignment with
organizational goals.

SDLC (Software Development Life Cycle)
SDLC stands for Software Development Life Cycle. It is a structured approach or
process used by software development teams to plan, design, develop, test, deploy,
and maintain software systems. SDLC provides a framework for managing the
entire software development process, from the initial concept to the final product.
Stages:-
1. Requirements Gathering: The development team gathers and documents the
requirements for the software. They interact with stakeholders, including clients
and end-users, to understand their needs, expectations, and desired
functionalities.

2. Design: Once the requirements are gathered, the team proceeds to design the
software. This stage involves creating a blueprint or plan that outlines how the
software will be structured, including its architecture, user interface design, and
database design.

3. Development: The development stage is where the actual coding and


programming of the software take place. Developers use the design specifications
to write the code, implement the functionalities, and create the software
application.

4. Testing: In the testing stage, the developed software is thoroughly tested to


ensure it works correctly and meets the specified requirements. Different types of
testing, such as functional testing, performance testing, and security testing, are
conducted to identify and fix any bugs, errors, or issues.

5. Deployment: Once the software passes all the tests and is deemed ready for
release, it is deployed or installed in the production environment. This stage
involves setting up the necessary infrastructure, configuring the software, and
making it available for end-users to access and utilize.

6. Maintenance: After the software is deployed, the maintenance stage begins. It


involves monitoring the software's performance, addressing any user-reported
issues or bugs, and implementing updates, enhancements, or fixes as required.
Regular maintenance ensures the software remains functional, secure, and up-to-
date.
Throughout the SDLC, there is typically a continuous process of gathering
feedback, making improvements, and iterating on the software based on user input
and changing requirements.

System Analyst
A system analyst plays an important role in the development and implementation of
information systems within an organization. They bridge the gap between business
needs and technology by understanding the requirements of stakeholders and
translating them into functional solutions.
Role and Responsibilities of a System Analyst:-
1. Requirements Gathering: Systems analysts interact with stakeholders, such as
users, managers, and executives, to understand their needs and objectives. They
collect and analyze information about existing processes, identify pain points,
and determine the requirements for new or improved systems.

2. Systems Design: Based on the gathered requirements, systems analysts create


system design specifications. They define the structure, functionalities, and
interactions of the proposed information system. This includes creating
diagrams, flowcharts, and documentation to illustrate the system's architecture
and logic.

3. Collaboration with Development Teams: Systems analysts collaborate with


software developers, programmers, and other technical teams to ensure that the
system design is implemented accurately. They communicate requirements,
provide clarifications, and assist in the development process by offering insights
and guidance.

4. Testing and Validation: Systems analysts participate in the testing phase to


verify that the developed system meets the requirements and functions as
intended. They help identify and resolve any issues or deviations from the
desired outcome.

5. User Support and Training: After the system is implemented, systems analysts
provide support to end-users by assisting with troubleshooting, addressing user
queries, and providing training to ensure smooth adoption and efficient
utilization of the system.
Requirement gathering techniques
Requirement gathering techniques are methods used by software development
teams to gather information about the desired functionality, features, and constraints
of a software system from stakeholders and end-users or customers.
There are three common requirement gathering techniques:
 Interviews: Interviews involve direct interaction between the project team and
stakeholders or end-users. In this technique, team members ask specific questions
to gather detailed information about the requirements. Interviews can be
conducted individually or in a group setting. It allows for in-depth discussions,
clarification of requirements, and the opportunity to explore additional details
based on the responses provided.

 Questionnaires: Questionnaires are a written set of questions that are distributed


to stakeholders or end-users to gather their requirements and feedback. This
technique allows for collecting standardized responses from a large number of
individuals. Questionnaires can be distributed electronically or in print format.
They provide an efficient way to collect information from geographically
dispersed individuals and allow for quantitative analysis of the responses.

 On-site Observation: On-site observation involves the project team observing


end-users or stakeholders in their work environment to understand their
processes, tasks, and needs. By directly observing users performing their
activities, the team can gain valuable insights into the system's requirements. This
technique helps uncover implicit requirements that may not be explicitly
communicated in interviews or questionnaires. It also provides an opportunity to
identify pain points, inefficiencies, and areas for improvement.

System Requirements Specification (SRS)


System Requirements Specification (SRS) is a document that captures and describes
the detailed requirements of a system or software application. It serves as a
foundation for system development, design, and testing.
Some points that are typically included in a SRS:-
1. Introduction:-

 Provide an overview of the system and its purpose.
 Describe the key objectives and goals of the system.

2. Functional Requirements:-

 Define the specific features and functionalities that the system should possess.
 Clearly state the expected behaviour and actions of the system.
 Specify any input data required and the corresponding output results.

3. Non-functional Requirements:-

 Outline the qualities and constraints that the system should meet.
 Include performance requirements (response time, scalability), usability,
reliability, security, and compatibility with other systems or platforms.

4. User Interface:

 Describe how the user interface should look and feel.


 Specify the layout, navigation, and interactions within the system.

5. System Interfaces:-

 Identify any external systems, databases, or APIs that the system needs to interact
with.
 Define the methods, protocols, and formats for data exchange.

6. Data Requirements:-

 Define the data entities and their attributes that the system will handle.
 Specify any data validation, storage, or retrieval requirements.

7. Operational Requirements:-

 Outline the system's operational aspects, such as system deployment, installation,


and maintenance requirements.
 Specify any backup, recovery, or disaster management procedures.

8. Documentation Requirements:-
 Documentation standards and guidelines.
 User manuals, technical guides, or other documentation to be produced.

9. Project Constraints:-
 Timeframe and deadlines for system development and implementation.
 Budget constraints or cost considerations.
 Resource limitations (personnel, infrastructure, etc.)

Feasibility Study
A feasibility study is an assessment conducted to determine whether a proposed
project, venture, or idea is viable, practical, and worth pursuing. It helps decision-
makers make informed choices by considering the advantages, disadvantages, risks,
and costs associated with a particular project.
1. Technical Feasibility:-
Technical feasibility assesses whether the proposed project can be developed or
implemented using the available technology, infrastructure, and resources. It
examines factors such as software and hardware requirements, technical expertise,
and compatibility with existing systems. This study helps determine if the project is
technically achievable within the given constraints.
2. Economic Feasibility:-
Economic feasibility evaluates the financial viability of the project. It analyzes the
costs and benefits associated with the project, including investment requirements,
operating expenses, potential revenue or savings, and return on investment (ROI).
The study aims to determine if the project is financially viable and if it can generate
sufficient profits or cost savings to justify the investment.
3. Operational Feasibility:-
Operational feasibility focuses on assessing whether the proposed project aligns
with the organization's existing operations, processes, and resources. It examines
factors such as staffing requirements, skill sets, training needs, and potential impact
on day-to-day operations. This study helps determine if the project can be smoothly
integrated into the organization's existing structure and if it will be accepted and
supported by the people involved.
4. Schedule Feasibility:-

It evaluates the project timeline and determines whether the proposed project can
be completed within the desired time frame. It considers factors such as resource
availability, project dependencies, critical path analysis, and potential risks that
could affect the project schedule. The study helps in identifying any scheduling
constraints or challenges that may impact the project's successful implementation.

Cost Benefit Analysis


Cost benefit analysis plays a crucial role in the Software Development Life Cycle
(SDLC) by helping organizations make informed decisions about their software
projects.
 Project Feasibility: Before starting a software project, a cost/benefit analysis
helps determine if the project is financially viable. It assesses whether the
potential benefits of the project outweigh the costs and risks involved.

 Resource Allocation: Cost/benefit analysis helps allocate resources effectively.


It ensures that the organization invests its time, money, and effort in projects that
offer the best return on investment (ROI).

 Risk Assessment: It allows organizations to identify potential risks and


uncertainties associated with the project. By understanding the potential costs
and benefits, they can make better risk management decisions.

 Decision Making: A cost/benefit analysis provides a solid foundation for


decision-making during different phases of the SDLC. It helps stakeholders
prioritize features, functionalities, and improvements based on their estimated
value.

 Project Success Evaluation: After completing a software project, cost/benefit


analysis helps evaluate its success. By comparing the actual benefits achieved
against the estimated costs, stakeholders can determine if the project met its
objectives.

 Scope Control: A cost/benefit analysis helps set realistic project scope by


considering the costs involved in implementing additional features or
functionalities and balancing them against potential benefits.

 Change Management: During the development process, changes may arise.
Cost/benefit analysis helps assess the impact of proposed changes to the project's
scope, schedule, and budget.

 Continuous Improvement: Conducting cost/benefit analysis for completed


projects provides valuable insights for future projects. It helps organizations
learn from their experiences and improve their decision-making processes.
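
To make the analysis concrete, here is a minimal Python sketch of a simple cost/benefit calculation. All figures and the function name are invented for illustration; a real analysis would usually also discount future cash flows.

```python
# Illustrative only: hypothetical cost and benefit figures for a software project.
def payback_and_roi(initial_cost, annual_benefit, annual_operating_cost, years):
    """Return cumulative net benefit, simple ROI, and the payback year (if any)."""
    cumulative = -initial_cost
    payback_year = None
    for year in range(1, years + 1):
        cumulative += annual_benefit - annual_operating_cost
        if payback_year is None and cumulative >= 0:
            payback_year = year
    total_cost = initial_cost + annual_operating_cost * years
    total_benefit = annual_benefit * years
    roi = (total_benefit - total_cost) / total_cost
    return cumulative, roi, payback_year

net, roi, payback = payback_and_roi(
    initial_cost=100_000, annual_benefit=45_000,
    annual_operating_cost=10_000, years=5,
)
print(f"Net benefit after 5 years: {net}, ROI: {roi:.0%}, payback in year {payback}")
```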

Cohesion and Coupling


 Cohesion:-
It refers to how closely the elements (functions, modules, or classes) within a
software component are related and focused on a single task or purpose.
High cohesion means that the elements within a component are tightly related and
work together towards a specific goal. High cohesion is desirable because it leads
to more organized and easier-to-understand code, making it simpler to maintain and
modify.
Low cohesion means that the elements are loosely related and may perform
multiple, unrelated tasks.

 Coupling:-
It refers to the level of dependency between different software components or
modules. It measures how much one component relies on another.
Low coupling means that the components are relatively independent of each other,
while high coupling means that they are tightly interconnected and heavily rely on
one another.
Low coupling is desirable because it reduces the ripple effects of changes made to
one component, making the system more flexible and easier to modify without
affecting other parts.

In software design, the goal is to achieve high cohesion within individual


components and low coupling between different components, as it leads to more
modular, understandable, and maintainable codebases.
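
A small illustrative sketch in Python (the class and method names are invented): the first class mixes unrelated responsibilities (low cohesion), while the refactored version splits them into focused classes wired together through constructor arguments (high cohesion, low coupling).

```python
# Low cohesion: one class mixes unrelated responsibilities (report text,
# email delivery, and database access all in one place).
class ReportManager:
    def build_report(self, orders): ...
    def send_email(self, address, body): ...
    def save_to_database(self, record): ...

# Higher cohesion, lower coupling: each class has a single focused purpose,
# and ReportService depends only on small, replaceable collaborators.
class ReportBuilder:
    def build(self, orders):
        total = sum(order["amount"] for order in orders)
        return f"{len(orders)} orders, total = {total}"

class EmailSender:
    def send(self, address, body):
        print(f"Sending to {address}: {body}")

class ReportService:
    def __init__(self, builder, sender):
        self.builder = builder      # dependencies passed in, not hard-coded
        self.sender = sender

    def email_report(self, orders, address):
        self.sender.send(address, self.builder.build(orders))

service = ReportService(ReportBuilder(), EmailSender())
service.email_report([{"amount": 120}, {"amount": 80}], "manager@example.com")
```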

Entity Relationship Model
ER model stands for an Entity-Relationship model. It is a high-level data model.
This model is used to define the data elements and the relationships among them
for a specified system. It develops a conceptual design for the database and
provides a simple, easy-to-understand view of the data. It uses entities, attributes, and relationships
to describe the structure and behaviour of a database system.

The main components of an ER model:-


1. Entity: An entity represents a real-world object such as a person, place, or thing.
An entity is represented by a rectangle.
Example  In an organization: manager, product, employee, department, etc.

 Weak Entity: An entity that depends on another entity is called a weak entity. A
weak entity doesn't contain any key attribute of its own. It is represented by a
double rectangle.

 Strong Entity: An entity that exists independently and has its own unique key
attribute. It can exist on its own without being dependent on any other entity.

2. Attribute: An attribute describes the property of an entity. It provides additional


information about the entity. An ellipse is used to represent an attribute.

Example  id, age, contact number, name, etc. can be attributes of a student.

 Key Attribute:-

The key attribute is used to represent the main characteristics of an entity. It


represents a primary key. The key attribute is represented by an ellipse with the
text underlined.

 Composite Attribute:-
An attribute that is composed of many other attributes is known as a composite
attribute. The composite attribute is represented by an ellipse, and its component
attributes are shown as ellipses connected to it.

 Multivalued Attribute:-
An attribute that can have more than one value is known as a multivalued
attribute. A double oval is used to represent a multivalued attribute.
Example  a student can have more than one phone number.

 Derived Attribute:-
An attribute that can be derived from other attributes is known as a derived attribute.
It is represented by a dashed ellipse.
Example  A person's age changes over time and can be derived from another
attribute such as date of birth.
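
As a small illustration, a derived attribute can be computed rather than stored; the hypothetical Student class below derives age from the stored date of birth.

```python
from datetime import date

# A derived attribute: age is not stored, it is computed from date_of_birth.
class Student:
    def __init__(self, name, date_of_birth):
        self.name = name
        self.date_of_birth = date_of_birth

    @property
    def age(self):
        today = date.today()
        had_birthday = (today.month, today.day) >= (
            self.date_of_birth.month, self.date_of_birth.day)
        return today.year - self.date_of_birth.year - (0 if had_birthday else 1)

s = Student("Asha", date(1998, 7, 15))
print(s.age)   # derived each time from the stored date of birth
```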

3. Relationship: A relationship is used to describe the relation between entities.


Diamond or Rhombus is used to represent the relationship. It represents how
entities are connected or interact with each other.

Example  A student can enroll in multiple courses, creating an "enrollment"


relationship between the student and the course entities.

Types of Relationships in an ER model:-


1. One-to-One (1:1): Each entity in one set is associated with exactly one entity in
the other set.
Example  A female can marry only one male, and a male can marry only one female.

2. One-to-Many: An entity from one set can be associated with multiple entities in
the other set, but an entity from the other set can be associated with only one
entity in the first set.
Example  A scientist can make many inventions, but each invention is made by
only one specific scientist.

3. Many-to-Many: This relationship allows multiple entities from both sets to be


associated with each other.
Example  An employee can be assigned to many projects, and a project can have
many employees.

4. Many-to-One: More than one entity instance on one side can be associated with
only one entity instance on the other side.
Example  A student enrolls in only one course, but a course can have many
students.
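
One informal way to picture these relationship types in code is to sketch the entities as plain Python data classes (the Student/Course/Enrollment names are only an example); each Enrollment links one student to one course, so together they realise a many-to-many relationship between students and courses.

```python
from dataclasses import dataclass

# Sketch only: entities as data classes; Enrollment realises the relationship,
# so Student and Course end up in a many-to-many association via Enrollment.
@dataclass
class Student:
    student_id: int          # key attribute
    name: str

@dataclass
class Course:
    course_id: int
    title: str

@dataclass
class Enrollment:            # a relationship can carry its own attributes
    student: Student
    course: Course
    grade: str = "NA"

alice = Student(1, "Alice")
db_course = Course(101, "Databases")
enrollments = [Enrollment(alice, db_course)]
print(enrollments[0].course.title)   # -> Databases
```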

Cardinality and Participation


 Cardinality describes the number of instances that can be associated between two
related entities, such as one-to-one, one-to-many, many-to-one, or many-to-
many relationships.

 Participation describes whether a record in one table must participate in a
relationship with a related record in another table, either mandatorily or
optionally.
OOM (Object Oriented Modeling)
Object-oriented modeling is a way of designing and representing systems or
problems using objects and their interactions. It is a popular approach in software
development and other fields where complex systems need to be analyzed and
designed.
We break down a system into individual objects, which are representations of real-
world or abstract entities. Each object has its own set of properties (attributes) and
behaviors (methods). Objects interact with each other by sending messages, which
trigger specific behaviors or actions.
Some important types used in OOM:
1. Class: A class is a blueprint or template that defines the attributes and methods
that objects of the class will possess. It represents a category of objects with
similar properties and behaviors.

2. Object: An object is an instance of a class. It represents a specific occurrence or


realization of a concept defined by its class. Objects have unique state (attribute
values) and behavior (method implementations).

3. Inheritance: Inheritance allows classes to inherit properties and behaviors from


other classes, forming a hierarchical relationship. A class that inherits from
another class is called a subclass or derived class, while the class being inherited
from is called the superclass or base class. Inheritance facilitates code reuse and
promotes a hierarchical organization of classes.

4. Polymorphism: Polymorphism allows objects of different classes to be treated


as objects of a common superclass. It enables methods to be implemented in
different ways in different classes while maintaining a consistent interface.
Polymorphism promotes flexibility and extensibility in the system.

5. Encapsulation: Encapsulation is the principle of bundling data and methods


within an object, hiding the internal details and providing a well-defined interface
for interaction. It ensures that objects maintain their integrity and can only be
accessed through specified methods.
6. Abstraction: Abstraction involves focusing on essential properties and behaviors
while ignoring unnecessary details. It allows the modeling of complex systems
by creating simplified representations. Abstract classes and interfaces are used to
define common characteristics and behaviors shared by multiple classes.
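
The short Python sketch below (a hypothetical banking example) shows several of these concepts together: an abstract Account class (abstraction), internal state reached only through methods (encapsulation), SavingsAccount and CurrentAccount reusing Account (inheritance), and one loop calling monthly_fee() on both kinds of account (polymorphism).

```python
from abc import ABC, abstractmethod

class Account(ABC):                      # abstraction: a common interface
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance          # encapsulation: internal state

    def deposit(self, amount):
        self._balance += amount

    @property
    def balance(self):                   # controlled access to hidden state
        return self._balance

    @abstractmethod
    def monthly_fee(self):
        ...

class SavingsAccount(Account):           # inheritance: reuses Account behaviour
    def monthly_fee(self):
        return 0

class CurrentAccount(Account):
    def monthly_fee(self):
        return 50

accounts = [SavingsAccount("Ravi", 1000), CurrentAccount("Meena", 5000)]
for acc in accounts:                     # polymorphism: same call, different result
    print(acc.owner, acc.monthly_fee())
```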

Static and Dynamic Modeling


 Static modelling:-
It focuses on the structure and organization of the system. It involves identifying
and representing the static elements of a system, such as classes, attributes,
relationships, and constraints. It helps in understanding the system's structure, the
entities involved, and how they relate to each other. (It deals with the structure
and static elements of a system.)
Example  In a banking system, static modeling would identify classes like
"Customer," "Account," and "Transaction," along with their attributes and
relationships.

 Dynamic modelling:-
It focuses on the behavior and functionality of the system. It captures the system's
dynamic aspects, such as the sequence of events, interactions, and state transitions
that occur over time. Dynamic models depict the system's behavior in response to
various events or actions. It helps in understanding how the system functions, the
flow of information, and the processes involved. (It focuses on the system's
behavior and how it responds to events or actions.)
Example  In a banking system, dynamic modeling would illustrate how a
customer interacts with the system to perform actions like depositing money,
withdrawing funds, or transferring funds between accounts.

Modeling using UML


UML stands for Unified Modeling Language. It is a standardized visual modeling
language used in software engineering and other fields to model, design, and
document systems. UML provides a set of graphical notations and conventions that
allow stakeholders, such as analysts, designers, and developers, to communicate and
understand system requirements, architecture, and behavior.

There are various UML Diagrams:-
1. Use case Diagram
2. Class Diagram
3. Sequence Diagram
4. Activity Diagram
5. State Diagram
6. Object Diagram
7. Package Diagram
8. Deployment Diagram
9. Interaction Diagram

Software Testing
Software testing is the process of evaluating a software application or system to
ensure that it meets its intended requirements and functions correctly. By
identifying and fixing defects early on, it reduces the likelihood of issues arising in
production, thereby improving the quality, reliability, and usability of the software.
The objective is to identify and fix any bugs or problems before the software is
deployed or made available to users.
There are two types of software testing:-
1. Manual Testing
2. Automation Testing

 Manual Testing:- Testing any software or an application according to the


client's needs without using any automation tool is known as manual testing. We
can say that it is a procedure of verification and validation.
 Automation Testing:- Automation testing is the best way to enhance the
efficiency, productivity, and coverage of software testing. It is used to re-run,
quickly and repeatedly, test scenarios that were previously executed manually. It
uses specific tools to automate manually designed test cases without human
intervention.

There are three types of manual testing:-



1. Black Box Testing:-

It is a software testing technique where the tester examines the software application
without having knowledge of its internal structure, implementation details, or code.
It focuses on testing the functionality of the software based on the specified
requirements, without considering how the software achieves its results. It is also
known as behavioural testing and functional testing.

2. White Box Testing:-


It is a software testing technique where the tester has full knowledge of the internal
structure, implementation, and code of the software application being tested. It
focuses on testing the internal logic, flow, and structure of the software. It is also
known as structural testing, glass box testing, clear box testing and transparent
box testing.

3. Grey Box Testing:-


Grey box testing is a software testing technique that combines elements of both
black box and white box testing. The tester has partial knowledge of the internal
structure and code of the software application. It involves testing the software with
some understanding of its internal workings, but not full access or knowledge. It is
also known as translucent-box testing.
There are three types of Black-Box testing:-
1. Functional Testing
2. Non-Functional Testing or Performance Testing
3. Maintenance Testing

 Functional Testing:-
It focuses on verifying that the software application or system functions correctly
and performs its intended tasks according to the specified requirements. It ensures
that the software meets the functional aspects or features expected by the users.
Functional Testing Techniques:-
1. Unit Testing:-
It is a type of functional testing that focuses on testing individual units or
components of software in isolation. The goal of unit testing is to verify the
correctness of each unit and ensure that it functions as intended.

 Purpose: It aims to test the smallest testable parts of the software, such as
functions, methods, or classes. It helps identify defects or errors in the code at
an early stage.

 Isolation: Unit tests are designed to be independent and isolated from other units
and external dependencies.

 Automation: Unit testing is usually automated, which means that test cases are
written as code and executed automatically.

 Test Coverage: By testing individual units extensively, developers can gain


confidence in the correctness of their code and reduce the likelihood of bugs.
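
A minimal example of automated unit tests using Python's built-in unittest module; apply_discount is an invented unit under test.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: compute a discounted price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```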

2. Integration Testing:-
It is a software testing technique that focuses on testing the integration and
interaction between different components or modules of a software system. It aims
to identify any defects or issues that may arise when these components are combined
and working together as a whole.
 Verifying Interactions: It ensures that the components or modules of the
software interact correctly with each other.

 Detecting Interface Issues: It helps identify any problems related to the


interfaces between the components. This includes issues such as incompatible
data formats, incorrect parameter passing, or communication errors.

 Testing Component Dependencies: It validates that the components handle


dependencies correctly.

 Overall System Functionality: It aims to verify that the integrated system


meets the defined requirements and performs its intended functions.
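
By contrast, an integration test exercises two or more components working together. The sketch below (hypothetical OrderRepository and BillingService classes) wires a service to an in-memory SQLite database and checks the combined behaviour rather than testing either piece in isolation.

```python
import sqlite3
import unittest

# Two components integrated: a repository backed by SQLite and a service using it.
class OrderRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")

    def add(self, amount):
        self.conn.execute("INSERT INTO orders (amount) VALUES (?)", (amount,))

    def total(self):
        return self.conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM orders").fetchone()[0]

class BillingService:
    def __init__(self, repo):
        self.repo = repo

    def invoice_total(self, amounts):
        for amount in amounts:
            self.repo.add(amount)
        return self.repo.total()

class BillingIntegrationTest(unittest.TestCase):
    def test_service_and_repository_work_together(self):
        repo = OrderRepository(sqlite3.connect(":memory:"))
        service = BillingService(repo)
        self.assertEqual(service.invoice_total([100.0, 250.5]), 350.5)

if __name__ == "__main__":
    unittest.main()
```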

3. User Acceptance Testing:-


It is a type of functional testing that focuses on validating the software from the end-
user's perspective. Its primary goal is to determine if the software meets the user's
requirements, expectations, and business needs.

 Alpha Testing: It is conducted within the development team or a dedicated
group of testers. It is performed before the software is released to external users
or customers. The main goal is to identify and resolve any major issues, validate
the software against the intended requirements, and gather feedback for
improvements.

 Beta Testing: It involves releasing the software to a limited group of external


users or customers. These users represent the target audience and provide real-
world usage feedback. The main goal is to gather user feedback, identify any
remaining defects or usability issues, and gain insights into the software's
performance in diverse environments.

 Non-Functional Testing:-

It is a type of software testing that checks non-functional parameters such as
reliability, load handling, performance, and stress of the software. Its primary
purpose is to verify qualities such as speed and responsiveness of the software
system against the defined non-functional parameters. Non-functional parameters
are tested only after functional testing is complete.

Non-Functional Testing Techniques:-

1. Performance Testing: Performance testing identifies and removes the causes of
slow or limited performance in the software, so that the software responds as
quickly as possible. For performance testing, a well-structured and clear
specification of the expected speed must be defined; otherwise, the outcome of
the test (success or failure) will not be obvious.

2. Security Testing: Security testing checks the software for vulnerabilities and
potential risks to ensure that sensitive data is protected and unauthorized access
is prevented.

3. Load Testing: Load testing checks the system's loading capacity, i.e., how many
people can work on the system simultaneously. During load testing, various
metrics are measured, such as response time, throughput and resource utilization.

4. Stress Testing: Stress testing is like pushing the software to its limits to see how
it handles intense or unusual conditions. It involves simulating heavy workloads,
large data volumes, or excessive user activity to measure how the system
performs and whether it can handle the stress without crashing or exhibiting
undesirable behaviour.

5. Compatibility Testing: Compatibility testing ensures that the software works


correctly across different platforms, operating systems, browsers, and devices. It
verifies that the software functions properly in various environments and
configurations, providing a consistent experience for users.
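
Dedicated tools (e.g. JMeter or Locust) are normally used for load and stress testing, but the basic idea of measuring response times under many concurrent requests can be sketched in plain Python as below; handle_request is only a stand-in for the real operation, and the user and request counts are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    """Stand-in for the operation under test (e.g. an HTTP call in a real load test)."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated processing time
    return time.perf_counter() - start

def run_load(concurrent_users=50, requests_per_user=4):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request, i)
                   for i in range(concurrent_users * requests_per_user)]
        timings = [f.result() for f in futures]
    timings.sort()
    print(f"requests: {len(timings)}")
    print(f"average response time: {sum(timings) / len(timings):.4f}s")
    print(f"95th percentile:       {timings[int(len(timings) * 0.95)]:.4f}s")

if __name__ == "__main__":
    run_load()
```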

 Maintenance Testing:-
Maintenance testing is a type of software testing that is performed after a software
application or system has been deployed or released into production. It focuses on
verifying and validating the software's behaviour, functionality, and performance
after it has undergone changes, updates, or maintenance activities.
When software undergoes maintenance, such as bug fixes, enhancements, or
updates, there is a need to ensure that the changes made do not introduce new issues
or regressions. Maintenance testing helps in validating the correctness and stability
of the software after these modifications.

1. Regression Testing: Regression testing is a black box testing technique. It is
used to verify that a code change in the software does not impact the existing
functionality of the product. Regression testing makes sure that the product
works fine with new functionality, bug fixes, or any change in the existing
feature. It is typically executed after bug fixes, enhancements, or changes to
ensure the overall stability and functionality of the software are maintained.

2. Re-testing: The process of verifying that a specific defect or issue reported


during testing has been resolved or fixed properly. It involves running the
specific test cases that initially exposed the defect to confirm that the issue
no longer exists. It is usually performed after the development team claims
to have fixed a reported defect. The focus is on the specific defect or issue
that was reported, rather than the entire system or a broader set of
functionalities.
Conversion Method in Implementation and Maintenance
Conversion methods refer to the various approaches used to migrate or convert
existing systems, data, or processes to new technologies or platforms. These
methods are chosen based on the specific requirements, complexity, and constraints
of the conversion project.
 Big Bang Conversion: In this approach, the entire system is migrated to the new
technology or platform all at once.
Advantages:-
1. Quick implementation: The conversion process is completed in a short
timeframe.

2. Reduced maintenance efforts: Once the migration is done, there is no need


to maintain multiple systems or interfaces.

3. Immediate benefits: Users can start leveraging the advantages of the new
technology immediately after the conversion.

 Phased Conversion: In this approach, the conversion process is divided into


multiple phases, and each phase is implemented gradually.
Advantages:-
1. Risk mitigation: By implementing the conversion in phases, potential risks
and issues can be identified and addressed early in the process.

2. Incremental learning: Users can gradually adapt to the new system, reducing
the impact of a sudden change.

3. Flexibility: Each phase can be adjusted based on feedback and evolving


requirements.

 Parallel Conversion: In this approach, the new system is implemented


alongside the existing system, and both systems run in parallel for a certain
period.
Advantages:-

1. Reduced downtime: The existing system remains operational during the
parallel run, minimizing business disruptions.

2. Validation and comparison: Running both systems simultaneously allows


for data validation and a direct comparison between the old and new systems.

3. Smooth transition: Users can gradually shift to the new system, ensuring a
smoother transition and minimizing resistance to change.

 Pilot Conversion: In this approach, a small subset of the system or a specific


department/group is chosen for the initial conversion.
Advantages:-
1. Controlled implementation: The pilot phase allows for a controlled
environment to test and fine-tune the new system before full-scale
deployment.

2. Early feedback: Users involved in the pilot phase can provide feedback,
allowing for improvements and adjustments before wider adoption.

3. Risk mitigation: Any issues or challenges encountered during the pilot phase
can be addressed before rolling out the conversion to the entire organization.

Verification and Validation

1. Verification includes checking documents, design, codes and programs.
   Validation includes testing and validating the actual product.

2. Verification is static testing.
   Validation is dynamic testing.

3. Verification does not include the execution of the code.
   Validation includes the execution of the code.

4. Methods used in verification are reviews, walkthroughs, inspections and desk-checking.
   Methods used in validation are black box testing, white box testing and non-functional testing.

5. The goal of verification is the application and software architecture and specification.
   The goal of validation is the actual product.

6. The quality assurance team does verification.
   Validation is executed on software code with the help of the testing team.

7. Verification comes before validation.
   Validation comes after verification.

8. Verification consists of checking documents/files and is performed by humans.
   Validation consists of executing the program and is performed by the computer.

9. Verification starts after a valid and complete specification is available.
   Validation begins as soon as the project starts.

10. Verification is for prevention of errors.
    Validation is for detection of errors.

11. Verification is also termed white box testing or static testing, as the work product goes through reviews.
    Validation can be termed black box testing or dynamic testing, as the work product is executed.

12. Verification finds about 50 to 60% of the defects.
    Validation finds about 20 to 30% of the defects.

13. Verification is based on the opinion of the reviewer and may change from person to person.
    Validation is based on fact and is often stable.

14. Verification is about process, standards and guidelines.
    Validation is about the product.

Prototype Model
A prototype model is a software development approach that involves creating an
early, simplified version of a product to gather feedback, test concepts, and validate
requirements. Prototyping Model is used when the customers do not know the exact
project requirements beforehand. In this model, a prototype of the end product is
first developed, tested and refined as per customer feedback repeatedly till a final
acceptable prototype is achieved which forms the basis for developing the final
product.

 Advantages:-
1. This model is flexible in design.
2. It is easy to detect errors.
3. We can find missing functionality easily.
4. It can be reused by the developer for more complicated projects in the future.
5. It ensures a greater level of customer satisfaction and comfort.
6. It is ideal for online system.
7. It helps developers and users both understand the system better.
8. It can actively involve users in the development phase.

 Disadvantages:-
1. This model is costly.
2. It has poor documentation because of continuously changing customer
requirements.
3. There may be too much variation in requirements.
4. Customers sometimes demand the actual product to be delivered soon after seeing
an early prototype.
5. There may be sub-optimal solutions because of developers in a hurry to build
prototypes.
6. Customers may not be satisfied or interested in the product after seeing the initial
prototype.
7. There is uncertainty in determining the number of iterations required.
8. It may increase the complexity of the system.

JAD (Joint Application Development) Model


Joint Application Development (JAD) is a collaborative approach used in the
software development process to gather requirements and design solutions. It brings
together stakeholders, end-users, and development teams to work jointly on the
project. This model was designed and put forward by Dr. Chuck Morris and
Dr. Tony Crawford of IBM, who proposed this model in the late 1970s.
Key Aspects of the JAD Model:-
 Collaborative Workshops: JAD involves holding collaborative workshops or
meetings with stakeholders, end-users, and development teams. Everyone
participates in the sessions to discuss and define the project's objectives,
requirements, and functionalities.

 Requirements Gathering: During JAD sessions, participants share their


perspectives and needs for the software. This helps in gathering comprehensive
and accurate requirements from various stakeholders, reducing
misunderstandings and ensuring everyone's input is considered.

 Iterative Approach: JAD follows an iterative approach, where participants


review and refine the project's requirements and designs in multiple sessions. This
allows for continuous improvement and ensures that the final solution aligns with
the stakeholders' vision.

 Real-Time Feedback: JAD allows for real-time feedback and clarification of


requirements. Developers can ask questions, and stakeholders can provide
immediate responses.
 Cross-Functional Collaboration: JAD brings together representatives from
different functional areas, such as business, IT, and end-users.

 Time and Cost Efficiency: JAD reduces the time required for requirements
gathering and design phases as it promotes focused and intense collaboration.
This efficiency can lead to cost savings by preventing rework.

 Documentation: Detailed documentation is created during JAD sessions,


capturing the requirements, decisions, and design choices. This documentation
serves as a reference for the development team and ensures a clear understanding
of the project's scope.
Advantages:-
1. Improved delivery time
2. Cost reduction
3. Better understanding
4. Improved quality
5. Lower risks
6. Faster progress
Disadvantages:-
1. It requires a significant commitment from participants.
2. It can be difficult to align goals and maintain focus.

RAD (Rapid Application Development) Model


The RAD (Rapid Application Development) model is a software development
methodology that focuses on quickly building and delivering software systems
through iterative development and prototyping. It emphasizes active user
involvement, iterative feedback, and rapid delivery.

Phases of RAD Model:-

1. Business Modelling: The information flow among business functions is defined


by answering questions like what data drives the business process, what data is
generated, who generates it, where the information goes, who processes it, and
so on.
2. Data Modelling: The data collected from business modeling is refined into a
set of data objects (entities) that are needed to support the business. The
attributes (character of each entity) are identified, and the relation between these
data objects (entities) is defined.
3. Process Modelling: The information object defined in the data modeling phase
are transformed to achieve the data flow necessary to implement a business
function. Processing descriptions are created for adding, modifying, deleting, or
retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction of
the software, often using fourth-generation language (4GL) techniques.
5. Testing & Turnover: Many of the programming components have already
been tested, since RAD emphasises reuse. This reduces the overall testing time.
However, the new parts must be tested, and all interfaces must be fully exercised.

Advantages:-

1. This model is flexible for change.


2. In this model, changes are easily accommodated.
3. Each phase in RAD delivers the highest-priority functionality to the customer.
4. It reduces development time.
5. It increases the reusability of features.

Disadvantages:-

1. It requires highly skilled designers.
2. Not all applications are compatible with RAD.
3. For smaller projects, the RAD model is not cost-effective.
4. It is not suitable when the technical risk is high.
5. It requires strong user involvement.

Waterfall Model
The Waterfall model is a linear and sequential software development methodology
that follows a predefined set of steps in a specific order. It is called the "Waterfall"
model because the process flows downwards, like a waterfall, with each phase
cascading into the next.

Phases of Waterfall model:-


1. Requirements Gathering: It involves gathering and documenting all the
requirements of the software. This includes understanding the needs of the end-
users and stakeholders, identifying functionalities, and defining the scope of the
project.

2. System Design: The overall system architecture and design are created based on
the requirements gathered in the previous phase. The system design specifies how
different components and modules will work together, and outlines the data
structures, interfaces, and algorithms needed.

3. Implementation: It involves actual coding or programming based on the system


design. Developers write the code according to the specifications provided in the
design phase. This is the stage where the software is built and developed.

4. Testing: Once the implementation is complete, the software goes through a


testing phase. Testers verify that the software meets the specified requirements
and performs as expected. Various types of testing, such as unit testing,
integration testing, and system testing, are conducted to identify and fix any
defects or bugs.
5. Deployment: Once the software has been thoroughly tested and approved, it is
deployed or released to the end-users or clients. This phase involves installation,
configuration, and setting up the software in the production environment.

6. Maintenance: After the software is deployed, it enters the maintenance phase.


This phase involves monitoring and supporting the software in the live
environment, addressing any issues or bugs that may arise, and making necessary
updates or enhancements based on user feedback or changing requirements.

Advantages:-

1. This model is simple to implement, and the number of resources required for it
is minimal.
2. The requirements are simple and explicitly declared; they remain unchanged
during the entire project development.
3. The start and end points for each phase are fixed, which makes it easy to track
progress.
4. The release date for the complete product, as well as its final cost, can be
determined before development.
5. It gives the customer ease of control and clarity due to a strict reporting
system.

Disadvantages:-

1. In this model, the risk factor is higher, so it is not suitable for large and
complex projects.
2. This model cannot accommodate changes in requirements during development.
3. It is tough to go back to a previous phase. For example, if the application has
moved to the coding phase and there is a change in requirements, it becomes
difficult to go back and change it.
4. Since testing is done at a later stage, challenges and risks cannot be identified
in the earlier phases, so a risk reduction strategy is difficult to prepare.

Distributed System
It is a collection of independent computers or nodes connected through a network.
These nodes work together to perform a common task or provide a unified service.
All the nodes in this system communicate with each other and handle processes in
tandem. Each of these nodes contains a small part of the distributed operating
system software.

Example:-
An example of a distributed system is a web-based e-commerce platform like Amazon. The
platform consists of multiple servers distributed across different locations. When a
user visits the website, their request is processed by a distributed system, where
different servers work together to handle tasks. This distributed architecture allows
Amazon to handle a large number of users, provide scalability, and ensure fault
tolerance.
Advantages:-
1. Scalability: Distributed systems can scale horizontally by adding more nodes to
the network, allowing them to handle increased workloads and accommodate a
growing user base. This makes them suitable for applications that require high
scalability, such as social media platforms or cloud-based services.

2. Fault Tolerance: Distributed systems are designed to be fault-tolerant. If one


node fails, the system can continue to function using other available nodes.

3. Performance: By distributing tasks across multiple nodes, distributed systems


can improve overall performance. They can leverage parallel processing

capabilities, reducing the time required to perform complex computations or
process large amounts of data.

4. Geographic Distribution: Distributed systems can be geographically


distributed, enabling users from different locations to access services efficiently.
This allows for reduced latency, improved responsiveness, and better user
experiences.
Disadvantages:-
1. Complexity: Building and managing distributed systems can be complex due to
the need for communication protocols, data synchronization, and handling
network failures. Developing robust distributed systems requires specialized
knowledge and expertise.

2. Network Communication: Distributed systems heavily rely on network


communication for coordination and data sharing. Network latency or failures
can impact system performance and introduce challenges in maintaining
consistency across distributed nodes.

3. Data Consistency: Ensuring data consistency across distributed nodes can be


challenging. Synchronization mechanisms and protocols need to be implemented
to handle concurrent updates and maintain data integrity.

4. Cost: Setting up and maintaining distributed systems can be costly. It requires


investment in hardware, networking infrastructure, and ongoing maintenance.
Additionally, ensuring security and data privacy across distributed nodes can add
to the cost.

Centralized System
A centralized system refers to a computing or organizational structure where control
and decision-making authority are concentrated in a single central location or entity.
In this system, all data, resources, and decision-making processes are managed and
controlled by a central authority.
Example:-
An example of a centralized system is the traditional banking system. In this system, a central bank
oversees and controls all banking operations, including account management,
transactions, and customer services. All customer data, financial records, and
decision-making processes are handled by the central bank, and individual branch
offices operate under the centralized control and supervision.

Advantages:-
1. Efficient Management: In a centralized system, a single authority can efficiently
manage and coordinate all operations. This centralized control enables
streamlined decision-making, resource allocation, and enforcement of policies. It
simplifies management processes and reduces duplication of efforts.

2. Standardization and Consistency: With a centralized system, it is easier to


enforce standard practices, policies, and procedures across the organization. This
leads to increased consistency and uniformity in operations, ensuring that all units
or branches follow the same guidelines and practices.

3. Enhanced Security: Centralized systems often provide better security measures


as they can implement robust security protocols at a central location. This allows
for stronger control over data access, authentication, and protection against
potential security threats.
Disadvantages:-
1. Single Point of Failure: One major disadvantage of a centralized system is that
it becomes a single point of failure. If the central authority experiences any
technical issues or disruptions, the entire system can come to a halt, affecting all
the associated units or branches.

2. Lack of Local Autonomy: Centralized systems can limit the autonomy of
individual units or branches. Decision-making power and flexibility may be
restricted, as all important decisions are made by the central authority. This can
lead to delays in responding to local needs or changes in the market.

3. Scalability and Growth Challenges: Centralized systems may face scalability


challenges when the organization grows or expands. As the volume of data and
operations increases, the central authority may struggle to handle the load
effectively. This can result in performance issues and slower response times.
Design and layers of Internet Based applications
Designing internet-based applications involves organizing the application's
components into layers to promote modularity, scalability, and maintainability. The
commonly used layered architecture for internet-based applications is the three-tier
architecture.
1. Presentation Layer (User Interface):-
The presentation layer is responsible for handling the user interface and user
interactions. It focuses on the visual presentation and user experience. This layer
includes components such as web browsers, mobile applications, user interfaces,
and presentation logic. Its primary role is to present data to the user and capture user
input.
2. Application Layer (Business Logic):
The application layer contains the business logic and application-specific
functionality. It processes the user's requests from the presentation layer, performs
necessary computations, and interacts with the data layer. This layer encapsulates
the core logic of the application, including validation, calculations, rules, and
algorithms. It ensures the proper functioning and behavior of the application.
3. Data Layer (Data Storage):-
The data layer handles the storage and retrieval of data. It includes databases, file
systems, or any other data storage mechanism. The data layer manages data
persistence, ensures data integrity, and handles data access and retrieval operations.
It provides the application layer with the necessary data to fulfill user requests and
updates the data based on application logic.
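
A compact Python sketch of the three layers (class names, data, and the tax rate are invented for illustration): the repository handles data storage, the service holds the business rule, and a small presentation function formats the result for the user.

```python
# Data layer: responsible only for storage and retrieval.
class ProductRepository:
    def __init__(self):
        self._products = {1: {"name": "Keyboard", "price": 1500.0}}

    def get(self, product_id):
        return self._products.get(product_id)

# Application layer: business rules, no knowledge of how data is stored or shown.
class PricingService:
    def __init__(self, repository):
        self.repository = repository

    def price_with_tax(self, product_id, tax_rate=0.18):
        product = self.repository.get(product_id)
        if product is None:
            raise LookupError("unknown product")
        return round(product["price"] * (1 + tax_rate), 2)

# Presentation layer: formats the result for the user (a web view in a real app).
def show_price(service, product_id):
    print(f"Price including tax: {service.price_with_tax(product_id)}")

show_price(PricingService(ProductRepository()), 1)
```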

DFD (Data Flow Diagram)
A Data Flow Diagram (DFD) is a graphical representation of how data flows
through a system or a process. It helps to visualize how information moves from
one part of the system to another. It consists of various components, such as
processes, data stores, and data flows.
 Processes: These are the actions or operations that are performed on the data.
Processes are represented by circles or ovals. Each process takes inputs, does
some processing, and produces outputs.

 Data Stores: These are places where data is stored or retrieved from. Data stores
are represented by rectangles. They can be physical, like a database or a file
cabinet, or virtual, like a temporary memory location.

 Data Flows: These are arrows that represent the movement of data between
processes and data stores. Data flows show how information is transmitted from
one point to another within the system.

Levels of DFD
Data Flow Diagrams (DFDs) are used to represent how data flows through a system
or process. DFDs consist of different levels that help in progressively detailing the
data flow and system components.
1. Level 0 DFD (Context Diagram):-

 The Level 0 DFD, also known as the Context Diagram, provides an overview of
the entire system.

 It represents the highest level of abstraction and shows the system as a single
process or function, surrounded by its external entities (sources or sinks of data).

 The main purpose of the Context Diagram is to show how the system interacts
with its environment without diving into internal details.

2. Level 1 DFD:-

 The Level 1 DFD represents the major processes or sub-systems identified in the
Context Diagram.

 It breaks down the single process of the Context Diagram into more manageable
and understandable sub-processes.

 Level 1 DFD provides a clear picture of how data flows between the main
processes, data stores, and external entities.

3. Level 2 DFD:-

 The Level 2 DFD takes each sub-process from the Level 1 DFD and further
decomposes it into more detailed sub-processes.

 It shows a more refined view of the system, breaking down the processes into
smaller, manageable parts.

 Level 2 DFDs provide more insights into how data moves within each sub-
process and how data stores are used.

4. Level 3 DFD (and so on):-

 Depending on the complexity of the system, you can continue to create lower-
level DFDs like Level 3, Level 4, and so on.

 Each subsequent level breaks down processes from the higher-level DFD into
more detailed and granular sub-processes.

 These lower levels provide a step-by-step understanding of data flow and process
interactions.

Application Architecture
Application architecture refers to the way software applications are structured and
organized. It defines the components, their interactions, and the overall layout of
the application. A good application architecture ensures that the software is reliable,
scalable, maintainable, and meets its intended requirements.
1. Server-Based Architecture:-
In a server-based architecture, most of the processing and data management tasks
are performed on the server-side. The clients (users' devices, like computers or
smartphones) primarily act as display terminals and interact with the server to send
requests and receive responses. This type of architecture is common in web
applications, where the server handles the core logic and data processing, and the
client devices show the results.

2. Client-Based Architecture:-
In a client-based architecture, most of the processing and data management tasks
are performed on the client-side. The server primarily serves as a data store or a
source of information. The client devices handle the core logic and data processing,
reducing the need for continuous communication with the server. This architecture
is commonly used in desktop applications and mobile apps.

3. N-Tier Architecture:-
N-tier architecture, also known as multi-tier architecture, divides the software
application into multiple layers or tiers, each responsible for different aspects of the
application.
The most common n-tier architecture consists of three tiers:-
 Presentation Tier (or Client Tier): This is the user interface layer where the
client interacts with the application.
 Application Tier (or Business Logic Tier): This layer contains the business
logic and processing rules of the application.
 Data Tier (or Data Storage Tier): This layer deals with data storage and
retrieval, interacting with databases or other data sources.

The n-tier architecture provides modularity, flexibility, and easier maintenance as


different components are separated into distinct layers, promoting code reusability
and scalability.
