System Analysis and Design
Diploma in IT
All rights reserved to: Matrix Institute of Information Technology (Pvt) Ltd.
Higher Education Qualifications BCS/HEQ/DIP/SAD
Preface
This study material was prepared solely for the BCS Diploma (Level 5) System
Analysis and Design module. We have taken great care when preparing this text to
ensure that all relevant topics are covered up to the level of the final BCS
examination. The System Analysis and Design subject is therefore discussed under
seven sections.
Prepared By:
Table of Contents
Chapter 1 – Introduction to System Analysis and Design ................................................... 04 – 09
1.1 Definition of System Analysis and Design ............................................................. 04 – 05
1.2 Introduction to System and Information System ................................................... 05 – 06
1.3 System Stakeholder .............................................................................................. 06 – 08
1.4 Role of Business Analyst, System Analyst and System Architect ........................ 08 – 09
The above figure shows the various stages involved in building an improved system.
System design is the process of planning a new business system or one to replace or
complement an existing system.
Analysis specifies what the system should do. Design states how to accomplish the
objective.
After the proposed system is analyzed and designed, the actual implementation of the
system occurs. After implementation, a working system is available, and it requires timely
maintenance. See the figure above.
Such a system (sometimes called a dynamic system) has three basic interacting
components or functions, which are given below:
Input: involves capturing and assembling elements that enter the system to be
processed. For example: Raw materials, energy, data, and human effort must be
secured and organized for processing.
Processing: involves the transformation processes that convert input into output.
Examples are a manufacturing process, the human breathing process, or mathematical
calculations.
Examples:
• A manufacturing system accepts raw materials as inputs and produces finished
goods as output.
• An information system is a system that accepts resources (data) as input and
processes them into products (information) as output.
• A business organization is a system where economic resources are transformed
by various business processes into goods and services.
The system concept becomes even more useful by including two additional components,
Feedback and Control. A system with feedback and control components is sometimes
called a cybernetic system; that is, a self-monitoring, self-regulating system.
Feedback: is data about the performance of a system. For example, data about sales
performance is feedback to the sales manager.
Control: involves monitoring and evaluating feedback to determine whether a system is
moving towards the achievement of its goal. The control function then makes necessary
adjustments to a system’s input and processing components to ensure that it produces
proper output. For example, a sales manager exercises control when reassigning sales
persons to new sales territories after evaluating feedback about their sales performance.
Example:
A business has many control activities: computers may monitor and control
manufacturing processes, accounting procedures help control financial systems, data
entry displays provide control of data entry activities, and sales quotas and sales
bonuses attempt to control sales performance.
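The input, processing, feedback and control components described above can be sketched in code. The following is an illustrative sketch only; the input values and the goal of 100 units are invented for this example:

```python
# A minimal sketch of a cybernetic system: input, processing, output,
# plus feedback and control. Numbers are invented for illustration.

def control(feedback, target, current_boost):
    """Control: evaluate feedback against the goal and adjust the input."""
    return current_boost + (target - feedback)

boost = 0                     # the adjustable part of the input
outputs = []
for cycle in range(3):
    inputs = [20, 30, boost]  # Input: capture and assemble elements
    output = sum(inputs)      # Processing: transform input into output
    feedback = output         # Feedback: data about system performance
    boost = control(feedback, target=100, current_boost=boost)
    outputs.append(output)

# The control function raises the input after the first short cycle, so
# the system converges on its goal: outputs == [50, 100, 100]
```

The point of the sketch is the loop shape: output is fed back, compared against the goal, and the adjustment is applied to the next cycle's input, just as the sales manager adjusts territories after reviewing sales figures.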
1. System Owners
System owners usually come from the ranks of management. For medium to large
information systems, system owners are usually middle or executive managers. For
smaller systems, system owners may be middle managers or supervisors. System
owners tend to be interested in the bottom line—how much will the system cost? How
much value or what benefits will the system return to the business?
2. System Users
System users make up the vast majority of the information workers in any information
system. Unlike system owners, system users tend to be less concerned with costs and
benefits of the system.
System users are concerned with the functionality the system provides to their jobs and
the system’s ease of learning and ease of use.
b. External System Users
The Internet has allowed traditional information system boundaries to be extended to
include other businesses or direct consumers as system users. These external system
users make up an increasingly large percentage of system users for modern information
systems. Examples include:
In addition to having formal systems analysis and design skills, a systems analyst must
develop or possess other skills, knowledge, and traits to complete the job. These
include:
systems, identifying options for improving business systems and bridging the needs of
the business with the use of IT."
Depending on the level of thinking about business analysis, the role ranges from the
technical Business Analyst (converting detailed business rules into system
requirements) to the conversion of shareholder return and risk appetite into strategic
plans.
The following section focuses on the IT sector perspective on business analysis,
where most of the deliverables concern requirements. The BA will record
requirements in some form of requirements management tool, whether a simple
spreadsheet or a complex application.
Business Requirements
Functional requirements
User (stakeholder) requirements
Quality-of-service (non-functional) requirements
Implementation (transition) requirements
There is rarely any detailed definition of the requirements, and many times, the real
reason for the request may not make good business sense. There tends to be no
emphasis on long term, strategic goals that the business wants to achieve via
Information Technology. The Business Analyst can bring structure and formalization of
requirements into this process, which may lead to increased foresight among Business
Owners.
In recent years, there has been an upsurge in the use of analysts of all sorts: business
analysts, business process analysts, risk analysts, and system analysts. Ultimately, an
effective project manager will include Business Analysts who break down communication
barriers between stakeholders and developers.
System Analyst
Systems architect
In systems engineering, the systems architect is the high-level designer of a system to
be implemented. The systems architect establishes the basic structure of the system,
defining the essential core design features and elements that provide the framework for
all that follows, and are the hardest to change later. The systems architect provides the
engineering view of the users' vision for what the system needs to be and do, and the
paths along which it must be able to evolve, and strives to maintain the integrity of that
vision as it evolves during detailed design and implementation.
System architecture is a sub-branch of systems engineering. Individuals working as
systems architects are the high-level designers of computer systems about to be
implemented. They help come up with the fundamental structure of the system, making
sure to outline the key design features that are more difficult to change later.
One way to think about what systems architects really do is to consider the role of
traditional architects, who design homes, supermarkets, and museums while considering
aspects such as usability, security, and functionality. Both systems architects and
traditional architects come up with the vision and suggest how best to achieve it; the only
difference is that systems architects do it all for computer systems.
Duties
Despite the lack of an accepted overall definition, the role of software architect generally
has certain common traits:
Design
The architect makes high-level design choices much more often than low-level choices.
In addition, the architect may sometimes dictate technical standards, including coding
standards, tools, or platforms, so as to advance business goals rather than to place
arbitrary restrictions on the choices of developers. Note that software architects rarely
deal with the physical architecture of the hardware environment, confining themselves to
the design methodology of the code.
Architects also have to communicate effectively, not only to understand the business
needs, but also to advance their own architectural vision. They can do so verbally, in
writing, and through various software architectural models that specialize in
communicating architecture.
The Systems Development Life Cycle (SDLC) is a conceptual model used in project
management that describes the stages involved in an information system development
project from an initial feasibility study through maintenance of the completed application.
Various SDLC methodologies have been developed to guide the processes involved
including the waterfall model (the original SDLC method), rapid application development
(RAD), joint application development (JAD), the fountain model and the spiral model.
Often, several models are combined into some sort of hybrid methodology.
Documentation is crucial regardless of the type of model chosen or devised for any
application, and is usually done in parallel with the development process. Some methods
work better for specific types of projects, but in the final analysis, the most important
factor for the success of a project may be how closely the particular plan was followed.
Feasibility Study
The feasibility study is used to determine if the project should get the go-ahead. If the
project is to proceed, the feasibility study will produce a project plan and budget
estimates for the future stages of development.
Requirement Analysis
Analysis gathers the requirements for the system. This stage includes a detailed study of
the business needs of the organization. Options for changing the business process may
be considered.
System Design
Design covers high-level design (what programs are needed and how they will
interact), low-level design (how the individual programs will work), interface design
(what the interfaces will look like) and data design (what data will be required). During
these phases, the software's overall structure is defined.
Analysis and Design are crucial in the whole development cycle. Any glitch in the
design phase could be very expensive to solve in a later stage of the software
development, so much care is taken during this phase. The logical system of the
product is developed in this phase.
Coding / Development
In this phase the designs are translated into code. Computer programs are written using
a conventional programming language or an application generator. Programming tools
like Compilers, Interpreters, and Debuggers are used to generate the code. Different
high level programming languages like C, C++, Pascal, Java are used for coding. With
respect to the type of application, the right programming language is chosen.
Testing
In this phase the system is tested. Normally, programs are written as a series of
individual modules, each of which is subjected to separate and detailed testing. The system is then tested
as a whole. The separate modules are brought together and tested as a complete
system. The system is tested to ensure that interfaces between modules work
(integration testing), the system works on the intended platform and with the expected
volume of data (volume testing) and that the system does what the user requires
(acceptance/beta testing).
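The progression from module testing to integration testing can be illustrated with a small sketch. The two modules and their data are invented for this example; in practice a test framework such as a unit-testing library would be used:

```python
# Hypothetical example: two small modules tested separately, then together.

def parse_order(line):
    """Module 1: turn an 'item,qty' string into an (item, qty) pair."""
    item, qty = line.split(",")
    return item.strip(), int(qty)

def price_order(item, qty, price_list):
    """Module 2: look up the unit price and compute the order total."""
    return price_list[item] * qty

# Module (unit) tests: each module exercised on its own.
assert parse_order("widget, 3") == ("widget", 3)
assert price_order("widget", 3, {"widget": 5}) == 15

# Integration test: the modules brought together and tested as a whole,
# checking that the interface between them works.
item, qty = parse_order("widget, 3")
assert price_order(item, qty, {"widget": 5}) == 15
```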
Implementation
This phase introduces the new system into the customers’ working environment. The major milestone
for product implementation is successful integration of source code components into a
functioning system. During implementation all the programs of the system are loaded
onto the user’s computer.
Maintenance
Inevitably the system will need maintenance. Software will definitely undergo change
once it is delivered to the customer. There are many reasons for the change. Change
could happen because of some unexpected input values into the system. In addition, the
changes in the system could directly affect the software operations. The software should
be developed to accommodate changes that could happen during the post
implementation period.
[Figure: the waterfall SDLC phases — Feasibility Study, Analysis, Design, Coding,
Testing, Implementation]
The first published model of the software development process was derived from other
engineering processes (Royce, 1970). The principal stages of the model map onto
fundamental development activities:
In principle, the result of each phase is one or more documents which are approved
(‘signed off’). The following phase should not start until the previous phase has finished. In
practice, these stages overlap and feed information to each other. During design,
problems with requirements are identified; during coding, design problems are found; and
so on. The software process is not a simple linear model but involves a sequence of
iterations of the development activities.
Because of the costs of producing and approving documents, iterations are costly and
involve significant rework. Therefore, after a small number of iterations, it is normal to
freeze parts of the development, such as the specification, and to continue with the later
development stages. Problems are left for later resolution, ignored or are programmed
around. This premature freezing of requirements may mean that the system won’t do
what the user wants. It may also lead to badly structured systems as design problems
are circumvented by implementation tricks.
During the final life-cycle phase (operation and maintenance) the software is put into
use. Errors and omissions in the original software requirements are discovered. Program
and design errors emerge and the need for new functionality is identified. The system
must therefore evolve to remain useful. Making these changes (software maintenance)
may involve repeating some or all previous process stages.
The problem with the waterfall model is its inflexible partitioning of the project into these
distinct stages. Commitments must be made at an early stage in the process and this
means that it is difficult to respond to changing customer requirements. Therefore, the
waterfall model should only be used when the requirements are well understood.
However, the waterfall model reflects engineering practice. Consequently, software
processes based on this approach are still used for software development, particularly
when this is part of a larger systems engineering project.
The important distinction between the spiral model and other software process models is
the explicit consideration of risk in the spiral model. Informally, risk is simply something
which can go wrong. For example, if the intention is to use a new programming
language, a risk is that the available compilers are unreliable or do not produce
sufficiently efficient object code. Risks result in project problems such as schedule and
cost overrun, so risk minimization is a very important project management activity.
A cycle of the spiral begins by elaborating objectives such as performance, functionality,
etc. Alternative ways of achieving these objectives and the constraints imposed on each
of these alternatives are then enumerated. Each alternative is assessed against each
objective. This usually results in the identification of sources of project risk.
The next step is to evaluate these risks by activities such as more detailed analysis,
prototyping, simulation, etc. Once risks have been assessed, some development is
carried out and this is followed by a planning activity for the next phase of the process.
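The first step of a spiral cycle, assessing each alternative against each objective and surfacing risk, can be sketched as a simple scoring exercise. The alternatives, scores and risk values below are invented for illustration:

```python
# Illustrative sketch: scoring alternatives against objectives at the
# start of a spiral cycle, with risk discounted from the score.

objectives = ["performance", "functionality"]
alternatives = {
    "new language":    {"performance": 9, "functionality": 7, "risk": 8},
    "proven language": {"performance": 6, "functionality": 7, "risk": 2},
}

def assess(alts, objs):
    """Rank alternatives by objective score minus risk exposure."""
    def score(name):
        a = alts[name]
        return sum(a[o] for o in objs) - a["risk"]
    return sorted(alts, key=score, reverse=True)

ranking = assess(alternatives, objectives)
# The proven language ranks first: its lower risk (unreliable compilers
# being the risk of the new language) outweighs raw performance.
```

A real risk evaluation would then probe the high-risk alternative further through analysis, prototyping or simulation, exactly as the text describes.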
There are no fixed phases such as specification or design in the spiral model. The spiral
model encompasses other process models. Prototyping may be used in one spiral to
resolve requirements uncertainties and hence reduce risk. This may be followed by a
conventional waterfall development. Formal transformation may be used to develop
those parts of the system with high security requirements.
RAD is used primarily for information systems applications; the RAD approach
encompasses the following phases:
Business modeling
The information flow among business functions is modeled in a way that answers the
following questions:
1. What information drives the business process?
2. What information is generated?
3. Who generates it?
4. Where does the information go?
5. Who processes it?
Data modeling
The information flow defined as part of the business modeling phase is refined into a set
of data objects that are needed to support the business. The characteristics (called
attributes) of each object are identified and the relationships between these objects are
defined.
Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions
are created for adding, modifying, deleting, or retrieving a data object.
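The processing descriptions named above (adding, modifying, deleting and retrieving a data object) can be sketched as minimal CRUD operations. The "customer" object and its attribute are invented for this example, and a dictionary stands in for the data store:

```python
# A minimal sketch of process modeling: processing descriptions for one
# data object from the data-modeling phase. Names are hypothetical.

store = {}   # stands in for the underlying data store

def add_customer(cid, name):
    store[cid] = {"name": name}            # add a data object

def modify_customer(cid, name):
    store[cid]["name"] = name              # modify an existing object

def retrieve_customer(cid):
    return store.get(cid)                  # retrieve an object (or None)

def delete_customer(cid):
    store.pop(cid, None)                   # delete an object if present

add_customer(1, "Ada")
modify_customer(1, "Ada L.")
assert retrieve_customer(1) == {"name": "Ada L."}
delete_customer(1)
assert retrieve_customer(1) is None
```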
Application generation
RAD assumes the use of fourth generation techniques and tools such as VB, VC++,
Delphi, etc., rather than creating software using conventional third generation
programming languages. RAD works to reuse existing program components (when
possible) or create reusable components (when necessary). In all cases, automated
tools are used to facilitate construction of the software.
Methodologies
SSADM Stages
The SSADM method involves the application of a sequence of analysis, documentation
and design tasks concerned with:
1. Analysis of the current system (Also known as: feasibility stage)
Analyze the current situation at a high level. A DFD (Data Flow Diagram) is used to
describe how the current system works and to visualize known problems.
4. Logical data design (Also known as: logical system specification stage)
In this stage, technically feasible options are chosen. The development/implementation
environments are specified based on this choice.
The following steps are part of this stage:
Define TSOs (Technical System Options) - Its purpose is to identify and define the
possible approaches to the physical implementation to meet the function
definitions. It also validates the service level requirements for the proposed
system in the light of the technical environment.
Select TSO - This step is concerned with the presentation of the TSOs to users
and the selection of the preferred option.
5. Logical process design (Also known as: logical system specification stage)
In this stage, logical designs and processes are updated. Additionally, the dialogs are
specified as well.
The following steps are part of this stage:
Define user dialogue - This step defines the structure of each dialogue required
to support the on-line functions and identifies the navigation requirements, both
within the dialogue and between dialogues.
Define update processes - This is to complete the specification of the database
updating required for each event and to define the error handling for each event.
Define enquiry processes - This is to complete the specification of the database
enquiry processing and to define the error handling for each enquiry.
6. Physical design
The objective of this stage is to specify the physical data and process design, using the
language and features of the chosen physical environment and incorporating installation
standards.
The following activities are part of this stage:
Prepare for physical design - Learn the rules of the implementation environment;
review the precise requirements for logical to physical mapping; and plan the
approach.
Complete the specification of functions - Incrementally and repeatedly develop
the data and process designs.
Logical Data Modeling - This is the process of identifying, modeling and documenting
the data requirements of the system being designed. The data are separated into entities
(things about which a business needs to record information) and relationships (the
associations between the entities).
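The separation into entities and relationships can be recorded as plain data, as in the following sketch. The entity names, attributes and relationship are invented for illustration:

```python
# A tiny sketch of a logical data model: entities (things the business
# records information about) and the relationships between them.

entities = {
    "Customer": ["customer_id", "name"],
    "Order":    ["order_id", "customer_id", "date"],
}

# relationship: (from_entity, to_entity, kind)
relationships = [("Customer", "Order", "one-to-many")]

# every relationship must join two known entities
for src, dst, _ in relationships:
    assert src in entities and dst in entities
```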
Entity Event Modeling - This is the process of identifying, modeling and documenting
the events that affect each entity and the sequence in which these events occur.
5. Eventually the new system takes physical shape according to factors like the
existing IT platform it will have to operate on, the data model adopted by the
organization and organizational procedures favored within the corporate culture.
On this module it is unlikely that you will be required to go as far as drawing a full
required physical DFD, although you should develop the ability to deduce the main
physical implications of implementing a system you design.
This is not a systems analysis module. The purpose of learning about DFDs is that most
managers will at some time be expected to participate in a systems analysis and design
exercise. Some will initiate such exercises. Either way, a manager needs tools to enable
the existing system to be analyzed, and a potential new system to be visually sketched.
The DFD performs these functions. It is not the only tool used, and it has some serious
limitations. Nevertheless, it is the most important one, and probably the first one to be
used.
Components of a DFD
There are four basic elements. The notation used here is that employed by the
Structured System And Design Method (SSADM). Some textbooks use a different
notation, but are easily comparable with this one.
External entity
This is whatever or whoever donates data/information to the system or receives
data/information from it. (From now on, for simplicity, the term data will be used
for either data or information). An external entity therefore resides outside the
boundaries of the system, meaning that the system has no formal control over
data issued by one, or over data once it has been received by one.
Process
A process transforms or manipulates data within a system. Each process box
contains the name and very brief description of the process (like 'Do this', 'Do
that'), a numerical identifier, and a location (e.g. 'Manager', 'Accountant',
'Production Department'). In order to show a process in more detail we need
a different modeling technique. There are three main ones, which are
Decision Tables, Decision Trees and Pseudo code (Structured English).
Typical processes are recording data, classifying and coding data, checking, sorting
documents into some order, collating documents, merging one batch of documents with
another, matching one document with another, calculating, creating a new document,
and so on.
Data Store
A data store is where data is held for a time within the system, probably awaiting the
arrival of more data before it can be processed, or awaiting being used to produce a
report, or before being transmitted to some other system. In a Current Physical DFD,
data stores represent real-world stores such as computer files, card indexes, ledgers
and so on. The store symbol is deliberately simple. There is no intention to show how
the data is structured. One Data Store may eventually yield a database of several files.
During system design, the techniques of Logical Data Structuring (LDS) and
Normalization may be employed to do this.
Data Flow
A data flow represents a set of data being transmitted between objects on the DFD. It is
important to note the following:
Data ALWAYS flows to or from a process. The other end may be an external entity, a
data store or another process. This means data CANNOT flow from one data store to
another. This is obvious when you think about it: a process needs to be performed to
make the data flow.
A data flow is just a line with an arrow showing the direction of flow and a name. Data
flows to and from a store can be shown as two way.
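The rule that every data flow must touch a process at one end can be expressed as a simple check. The element names and kinds below are invented for illustration:

```python
# Sketch of the DFD flow rule above: a data flow is legal only if at
# least one end is a process. Element names are hypothetical.

kinds = {
    "Customer":     "external entity",
    "Orders":       "data store",
    "Record order": "process",
}

def valid_flow(src, dst):
    """A flow must run to or from a process."""
    return kinds[src] == "process" or kinds[dst] == "process"

assert valid_flow("Customer", "Record order")   # entity -> process: OK
assert valid_flow("Record order", "Orders")     # process -> store: OK
assert not valid_flow("Customer", "Orders")     # no process: illegal
```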
The next stage is to break this down into a number of basic processes like this:
This is called a level 1 DFD. Typically a level 1 DFD shows up to six processes, although
there is no formal limit. Note that the same data store or an external entity may be
shown more than once on a DFD. This is signaled by the additional bar on the symbol.
This is feasible because the DFD does not attempt to show time dependency of data
flows. (There is a technique to do this called the Entity Life History). Likewise, the
numbers are used for identification only.
Note that the same location may be shown on successive processes (e.g. Order Clerk in
the case of processes 1 and 3). This may be the case even where both processes are
carried out at the same time. The reason for breaking them down is to show that a
decision is taken and the second process is dependent on that. In this case, for example,
not ALL orders result in activation of the reorder process - only those where the order
requires new stock to be ordered.
On a required logical DFD, the location box is often left empty.
Any of these processes can be decomposed into a further level of detail, and then
further again. For example, box 1 of the level 1 DFD for order processing we have just
introduced might be decomposed like this:
At any level, the data flows and data stores must balance (you cannot introduce new
flows into or out of the process). Also, it is easy to tell which process has been
decomposed by the numbering system. E.g. Sub processes of Process 2 are 2.1, 2.2,
and so on.
At this level, we see that the DFD is incomplete; it does not tell the whole story. For
example, the customer does not seem to be notified of the acceptance of the order. The
DFD does not tell us what happens if a credit check fails. The stock file does not seem to
be updated at all in the system. It is common sense that if data is shown being read from
a data store but not written to it, then either the system is incomplete, or the process of
updating the file lies outside the boundaries of the system (which is sometimes most
unlikely).
Conventions
1. To help interpret a situation, choose symbols, scenes or images that represent the
situation. Use as many colours as necessary and draw the symbols on a large
piece of paper. Try not to get too carried away with the fun and challenge to your
ingenuity in finding pictorial symbols.
2. Put in whatever connections you see between your pictorial symbols: avoid
producing merely an unconnected set. Places where connections are lacking may
later prove significant.
3. Avoid too much writing, either as commentary or as ‘word bubbles’ coming from
people’s mouths (but a brief summary can help explain the diagram to other
people).
4. Don’t include systems boundaries or specific references to systems in any way
(see below).
Root Definition
A root definition is expressed as a transformation process that takes some entity as
input, changes or transforms that entity, and produces a new form of the entity as output.
Root definitions are written as sentences that elaborate a transformation. There are six
elements that make up a well formulated root definition, which are summed up in the
mnemonic CATWOE.
Customer: Everyone who stands to gain benefit from the system is considered a
customer of the system. If the system involves sacrifices, such as layoffs, then
those victims must also be counted as customers.
Actor: The actors perform the activities defined in the system.
Transformation process: This is shown as the conversion of input to output.
Weltanschauung: The German expression for world view. This world view makes
the transformation process meaningful in context.
Owner: Every system has some proprietor, who has the power to start up and
shut down the system.
Environmental constraints: External elements exist outside the system which it
takes as given. These constraints include organizational policies as well as legal
and ethical matters.
The following is an example of a root definition expressed using the CATWOE concept:
C candidate students
A university staff
T candidate students transformed into degree holders
W the belief that awarding degrees and diplomas is a good way of
demonstrating the qualities of candidates to potential employers
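A root definition can be recorded as plain structured data, one field per CATWOE element. The sketch below reuses the university example from the text; note that the Owner and Environmental constraints values are assumptions added for completeness, since the excerpt lists only C, A, T and W:

```python
# Sketch: a CATWOE root definition as a dataclass. The owner and
# environment fields are assumed values, not given in the text.
from dataclasses import dataclass

@dataclass
class RootDefinition:
    customer: str
    actor: str
    transformation: str
    weltanschauung: str
    owner: str
    environment: str

degrees = RootDefinition(
    customer="candidate students",
    actor="university staff",
    transformation="candidate students transformed into degree holders",
    weltanschauung="awarding degrees and diplomas is a good way of "
                   "demonstrating the qualities of candidates to "
                   "potential employers",
    owner="the university",                 # assumed for illustration
    environment="education regulations",    # assumed for illustration
)
```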
Conceptual Model
Conceptual models demonstrate potential activities and their logical dependencies.
The activities, which must be expressed as a verb-noun phrase (e.g. 'do something',
'eat dinner', 'open new factory'), are placed in rough, hand-drawn bubbles.
The bubbles may be joined by arrows indicating dependence: that one activity is
consequent upon another (it cannot be performed unless the other has been
performed, or it will be done poorly if the other is done poorly).
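These activity bubbles and dependency arrows form a small directed graph, and the dependence rule means activities can be ordered so none appears before one it depends on. The activity names below are invented for illustration:

```python
# Sketch: conceptual-model activities and their logical dependencies
# as a graph, ordered with the standard library. Names are invented.
from graphlib import TopologicalSorter

# activity -> activities it depends on (cannot be performed unless...)
depends_on = {
    "open new factory": {"hire staff", "buy site"},
    "hire staff": set(),
    "buy site": set(),
}

order = list(TopologicalSorter(depends_on).static_order())
# dependencies always come before the activity that needs them
assert order[-1] == "open new factory"
```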
These methods were developed during the 1990s. Most agile methods attempt to
minimize the risk of developing software by working in short time boxes, called
iterations, which typically last for a few weeks. Each iteration is like a miniature
software project of its own, and includes all the tasks necessary to release the mini
increment of new functionality.
While an increment may not add enough functionality to warrant releasing the product,
an agile software project intends to be capable of releasing new software at the end of
every iteration. At the end of each iteration, the team re-evaluates project priorities.
Agile methods allow the development team to focus on the software itself rather than
on its design and documentation. They also support business application development
where the system requirements change rapidly during the development process, and
they are best suited to small or medium-sized business systems and personal computer
products.
Following are some examples of agile methodologies:
Feasibility study
In this phase the problem is defined and the technical feasibility of the desired
application is verified. Apart from these routine tasks, it is also checked whether the
application is suitable for Rapid Application Development (RAD) approach or not. Only if
the RAD is found as a justified approach for the desired system, the development
continues.
Business study
In this phase the overall business study of the desired system is done. The business
requirements are specified at a high level and the information requirements out of the
system are identified. Once this is done, the basic architectural framework of the desired
system is prepared.
Systems designed using Rapid Application Development (RAD) should be highly
maintainable, as they are based on an incremental development process. The
maintainability level of the system is also identified here so as to set the standards for
quality control activities throughout the development process.
Dynamic System Development Method (DSDM) assumes that all previous steps may be
revisited as part of its iterative approach. Therefore, the current step need be completed
only enough to move to the next step, since it can be finished in a later iteration. The
premise is that business requirements will probably change anyway as understanding
increases, so any further work would have been wasted.
According to this approach, time is taken as a constraint: time and resources are fixed
while the requirements are allowed to change. This does not follow the fundamental
assumption of building a perfect system the first time, but instead provides a usable and
useful 80% of the desired system in 20% of the total development time. This approach
has proved very useful under time constraints and varying requirements.
Principles of DSDM
There are 9 underlying principles of DSDM consisting of four foundations and five
starting-points for the structure of the method. These principles form the cornerstones of
development using DSDM.
1. User involvement is the main key in running an efficient and effective project,
where both users and developers share a workplace, so that the decisions can
be made accurately.
2. The project team must be empowered to make decisions that are important to the
progress of the project, without waiting for higher-level approval.
7. The high-level scope and requirements should be baselined before the project starts.
8. Testing is carried out throughout the project life-cycle, in order to avoid the expensive cost of fixing and maintaining the system after delivery.
9. Communication and cooperation between all project stakeholders is an important
prerequisite for running an efficient and effective project.
Criticism
A methodology is only as effective as the people involved; Agile does not solve this.
Often used as a means to bleed money from customers through a lack of defined deliverables.
Lack of structure and necessary documentation.
Only works with senior-level developers.
Incorporates insufficient software design.
Requires meetings at frequent intervals, at enormous expense to customers.
Requires too much cultural change to adopt.
Can lead to more difficult contractual negotiations.
Can be very inefficient: if the requirements for one area of code change through various iterations, the same programming may need to be done several times over, whereas if a plan were followed, a single area of code would be expected to be written only once.
Impossible to develop realistic estimates of the work effort needed to provide a quote, because at the beginning of the project no one knows the entire scope/requirements.
Can increase the risk of scope creep due to the lack of detailed requirements documentation.
What is prototyping?
Prototyping can be defined as the process of constructing a prototype.
What is a prototype?
A prototype is a model or simulation, and can even be considered a benchmark, of the entire system.
Advantages of prototyping
Prototypes may be easily changed or even discarded.
Prototyping may improve communication between and among developers and
customers
Users may be more satisfied with systems developed using prototyping.
A prototype may serve as a marketing tool.
A prototype may serve as the basis for operational specifications.
Early visibility of the prototype may help management assess progress.
Prototypes may demonstrate progress at an early stage of development.
Prototypes may provide early training for future users of the system.
Prototyping may reduce misunderstandings between and among developers and
customers.
Prototyping may reduce redesign costs if problems are detected early when they
are cheap to fix.
Prototyping may require less effort (43% less, according to Boehm, Gray &
Seewaldt, 1984) than conventional development.
Prototyping may result in a product that is a better fit for the customer's
requirements.
Prototyping may save on initial maintenance costs because, in effect, customers
are doing "acceptance testing" all along the way.
Prototyping may strengthen requirements specifications.
Users may understand prototypes better than paper specifications.
Disadvantages of prototyping
Prototyping may encourage an excess of change requests.
Working prototypes may lead management and customers to believe that the final
product is almost ready for delivery.
The excellent (or disappointing) performance characteristics of prototypes may
mislead the customer.
Developers may have difficulty writing the back-end code needed to support the
slick front-end interface designed with the prototyping tool.
During prototyping, the only "design specification" is the prototype itself, which
may allow uncontrolled change.
Early prototypes may be of low fidelity, dismissed as toys.
Hi-fidelity prototypes may be mistaken for a real product.
Important system characteristics (e.g., performance, security, robustness and
reliability) may have been ignored during prototype development.
Prototypes of complex systems may themselves be rather complex.
Prototyping is an adaptive process that may not exhibit well-defined phases.
1. Throwaway Prototyping
Throwaway or Rapid Prototyping is the most easily understood prototyping method. After
preliminary requirements gathering is accomplished, a simple working model of the
system is constructed to visually show the users what their requirements may look like
when they are implemented into a finished system.
Rapid Prototyping involves creating a working model of various parts of the system at a
very early stage, after a relatively short investigation. The method used in building it is
usually quite informal, the most important factor being the speed with which the model is
provided. The model then becomes the starting point from which users can re-examine
their expectations and clarify their requirements. When this has been achieved, the
prototype model is 'thrown away', and the system is formally developed based on the
identified requirements.
There are many strong reasons for using Throwaway Prototyping. The most obvious one
is that it can be done quickly. If the users can get quick feedback on their requirements,
they may be able to refine them early in the development of the software. In addition, the
speed is crucial since with a limited budget of time and money, the majority must be
spent on the most difficult task: coding the system. By creating prototypes quickly, the
time saved can be very useful.
Making changes early in the development lifecycle is also extremely cost effective since
there is nothing at that point to redo. If a project is changed after a considerable amount
of work has been done, small changes could require very large efforts to institute since
software systems have many dependencies.
A strength of Throwaway Prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and seeing it in front of them makes grasping how the system will work a much easier task.
It is asserted that revolutionary rapid prototyping is a more effective way to deal with user-requirements-related issues, and therefore a greater enhancement to software productivity overall. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements, and the subsequent construction of a valid and usable system from the user's perspective, via conventional software development models.
Not exactly the same as Throwaway Prototyping, but certainly in the same family is the
usage of storyboards, animatics, or drawings to show how the system will look. These
representations, although not functional, can be very useful.
2. Evolutionary Prototyping
Evolutionary Prototyping is quite different from Throwaway Prototyping. The main goal
when using Evolutionary Prototyping is to build a very robust prototype in a structured
manner and constantly refine it. "The reason for this is that the Evolutionary prototype,
when built, forms the heart of the new system, and the improvements and further
requirements will be built on to it."
When developing a system using Evolutionary Prototyping, the system is continually
refined and rebuilt. "Evolutionary prototyping acknowledges that we do not understand all
the requirements and builds only those that are well understood.” This technique allows
the development team to add features, or make changes that couldn't be conceived
during the requirements and design phase.
Evolutionary Prototyping has a very big advantage over Throwaway Prototyping. The
advantage is in the fact that evolutionary prototypes are functional systems. Although
they may not have all the features the users have planned, they may be used on an
interim basis until the final system is delivered.
"It is not unusual within a prototyping environment for the user to put an initial prototype
to practical use while waiting for a more developed version” The user may decide that a
system with minimal details is better than no system at all.
It is also important to note that when using Evolutionary Prototyping, developers can focus on developing the parts of the system that they understand, instead of working on developing the whole system.
To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers.
3. Incremental Prototyping
The incremental approach can be likened to 'building blocks'; incrementing each time a
new component is added or integrated, based on an overall design solution. When all of
the components are in place, the solution is complete.
An advantage of this method is that the client and/or end-users have the opportunity to
test the developed components and their functionality. They also have opportunities to
provide feedback while other components are still in development, and can thus
influence the outcome of further development.
Example: in a new word processing application, a user may be able to work with the interface to open and save documents, but may not be able to print those documents or make changes to fonts or styles, because these components have yet to be delivered. The client and/or end-users are able to provide feedback on the components developed so far. This may also influence how further components are implemented.
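The 'building blocks' idea can be sketched in code. The following is a hypothetical Python sketch (the class and method names are invented for illustration): the open and save components have been delivered, while printing is still a stub awaiting a later increment.

```python
class WordProcessor:
    """Hypothetical incremental build: components arrive increment by increment."""

    def open(self, name):
        # delivered in increment 1
        return f"opened {name}"

    def save(self, name):
        # delivered in increment 1
        return f"saved {name}"

    def print_document(self, name):
        # planned for a later increment; stubbed for now
        raise NotImplementedError("printing ships in a later increment")


wp = WordProcessor()
print(wp.open("report.doc"))        # already usable, feedback possible
try:
    wp.print_document("report.doc")
except NotImplementedError as e:
    print(e)                        # component not yet integrated
```

Users can exercise the delivered components and comment on them while the stubbed ones are still in development, which is exactly the feedback loop described above.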
Requirements analysis can be a long and arduous process during which many delicate
psychological skills are involved. New systems change the environment and
relationships between people, so it is important to identify all the stakeholders, take into
account all their needs and ensure they understand the implications of the new systems.
Analysts can employ several techniques to elicit the requirements from the customer.
Historically, this has included such things as holding interviews, or holding focus groups
(more aptly named in this context as requirements workshops) and creating
requirements lists. More modern techniques include prototyping, and use cases. Where
necessary, the analyst will employ a combination of these methods to establish the exact
requirements of the stakeholders, so that a system that meets the business needs is
produced.
In requirements engineering, requirements elicitation is the practice of obtaining the
requirements of a system from users, customers and other stakeholders. The practice is
also sometimes referred to as requirements gathering.
Requirements elicitation is non-trivial because you can never be sure you get all
requirements from the user and customer by just asking them what the system should
do. Requirements elicitation practices include interviews, questionnaires, user
observation, workshops, brainstorming, use cases, role playing and prototyping.
Requirements elicitation is a part of the requirements engineering process, usually
followed by analysis and specification of the requirements.
Solicit participation from many people so that requirements are defined from
different points of view; be sure to identify the rationale for each requirement that
is recorded.
Identify ambiguous requirements as candidates for prototyping.
Create usage scenarios to help customers/users better identify key requirements.
Stakeholder Analysis
Stakeholder analysis in conflict resolution, project management, and business
administration, is the process of identifying the individuals or groups that are likely to
affect or be affected by a proposed action, and sorting them according to their impact on
the action and the impact the action will have on them. This information is used to
assess how the interests of those stakeholders should be addressed in a project plan,
policy, program, or other action. Stakeholder analysis is a key part of stakeholder
management.
Stakeholder analysis is a term that refers to the action of analyzing the attitudes of
stakeholders towards something (most frequently a project). It is frequently used during
the preparation phase of a project to assess the attitudes of the stakeholders regarding
the potential changes. Stakeholder analysis can be done once or on a regular basis to
track changes in stakeholder attitudes over time.
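One common way to sort stakeholders by their impact on the action and the action's impact on them is a simple influence/interest grid. The Python sketch below is illustrative only: the stakeholder names, scores, thresholds and category labels are assumptions, not a fixed standard.

```python
def classify(influence, interest):
    """Place a stakeholder on a simple influence/interest grid.

    Scores run 0-10; the threshold of 5 is an illustrative choice."""
    if influence >= 5 and interest >= 5:
        return "manage closely"
    if influence >= 5:
        return "keep satisfied"
    if interest >= 5:
        return "keep informed"
    return "monitor"


# Hypothetical stakeholders scored (influence, interest)
stakeholders = {
    "finance director": (8, 3),
    "end users": (3, 9),
    "regulator": (9, 8),
}
for name, (influence, interest) in stakeholders.items():
    print(name, "->", classify(influence, interest))
```

The resulting categories suggest how much engagement each stakeholder needs in the project plan.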
Stakeholder identification
Stakeholders (SH) are persons or organizations (legal entities such as companies and standards bodies) which have a valid interest in the system. They may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders. It is increasingly recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include:
anyone who operates the system (normal and maintenance operators)
anyone who benefits from the system (functional, political, financial and social
beneficiaries)
anyone involved in purchasing or procuring the system. In a mass-market product organization, product management, marketing and sometimes sales act as surrogate consumers (mass-market customers) to guide development of the product
organizations which regulate aspects of the system (financial, safety, and other
regulators)
people or organizations opposed to the system (negative stakeholders; see also
Misuse case)
organizations responsible for systems which interface with the system under
design
those organizations who integrate horizontally with the organization for whom the
analyst is designing the system
Therefore, stakeholder analysis has the goal of developing cooperation between the
stakeholder and the project team and, ultimately, assuring successful outcomes for the
project. Stakeholder analysis is performed when there is a need to clarify the
consequences of envisaged changes or at the start of new projects and in connection
with organizational changes generally. It is important to identify all stakeholders for the
purpose of identifying their success criteria and turning these into quality goals.
3 SPECIFIC REQUIREMENTS
3.1 External Interface Requirements
3.1.1 User Interfaces
3.1.2 Hardware Interfaces
3.1.3 Software Interfaces
3.1.4 Communications Protocols
3.1.5 Memory Constraints
3.1.6 Operations
3.1.7 Product Functions
3.1.8 Assumptions and Dependencies
3.2 Software Product Features
3.3 Software System Attributes
3.3.1 Reliability
3.3.2 Availability
3.3.3 Security
3.3.4 Maintainability
3.3.5 Portability
3.3.6 Performance
3.4 Database Requirements
3.5 Other Requirements
4 ADDITIONAL MATERIALS
Types of Requirements
Requirements are categorized in several ways. The following are common
categorizations of requirements that relate to technical management.
Customer Requirements
Statements of fact and assumptions that define the expectations of the system in terms
of mission objectives, environment, constraints, and measures of effectiveness and
suitability (MOE/MOS). The customers are those that perform the eight primary functions
of systems engineering, with special emphasis on the operator as the key customer.
Operational requirements will define the basic need and, at a minimum, answer the
questions posed in the following listing:
Operational distribution or deployment: Where will the system be used?
Mission profile or scenario: How will the system accomplish its mission
objective?
Performance and related parameters: What are the critical system
parameters to accomplish the mission?
Utilization environments: How are the various system components to be
used?
Effectiveness requirements: How effective or efficient must the system be
in performing its mission?
Operational life cycle: How long will the system be in use by the user?
Environment: In what environments will the system be expected to operate effectively?
Functional Requirements
Functional requirements explain what has to be done by identifying the necessary task,
action or activity that must be accomplished. Functional requirements will be used as the top-level functions for functional analysis.
Non-functional Requirements
Non-functional requirements are requirements that specify criteria that can be used to
judge the operation of a system, rather than specific behaviors.
Structural Requirements
Structural requirements explain what has to be done by identifying the necessary
structure of a system.
Architectural Requirements
Architectural requirements explain what has to be done by identifying the necessary
system architecture (structure + behavior + ...) of a system.
Performance Requirements
The extent to which a mission or function must be executed; generally measured in
terms of quantity, quality, coverage, timeliness or readiness. During requirements
analysis, performance (how well does it have to be done) requirements will be
interactively developed across all identified functions based on system life cycle factors;
and characterized in terms of the degree of certainty in their estimate, the degree of
criticality to system success, and their relationship to other requirements.
Design Requirements
The “build to,” “code to,” and “buy to” requirements for products and “how to execute”
requirements for processes expressed in technical data packages and technical
manuals.
Derived Requirements
Requirements that are implied or transformed from a higher-level requirement. For
example, a requirement for long range or high speed may result in a design requirement
for low weight.
Allocated Requirements
A requirement that is established by dividing or otherwise allocating a high-level
requirement into multiple lower-level requirements. Example: A 100-pound item that
consists of two subsystems might result in weight requirements of 70 pounds and 30
pounds for the two lower-level items.
A project manager has to balance the project scope against the constraints of schedule,
budget, staff resources, and quality goals. One balancing strategy is to drop or defer low
priority requirements to a later release when you accept new, higher priority
requirements or other project conditions change. If customers don't differentiate their requirements by importance and urgency, the project manager must make these trade-off decisions. Because customers may not agree with the project manager's decisions, it is better that the customers themselves indicate which requirements are critical and which can wait. Establish priorities early in the project, while you have more options available for achieving a successful project outcome.
Cost – With an eye toward funding, this approach may be implemented in a number of ways: implementing the least expensive requirements first, or first implementing the requirements with the greatest ROI (return on investment).
Risk – This approach prioritizes the riskiest requirements first, with the logic that
should they fail, the project can be abandoned with a minimum of investment.
This approach often makes sense when a controversial or untested initiative is
planned.
Regulatory Compliance – With this approach, the requirements that are needed
to meet legal and/or regulatory requirements are given highest priority. If an
organization has a high priority (for marketing or legal reasons) to incorporate a certain regulation, such as Section 508 compliance, requirements that force accordance with Section 508 would be given highest priority.
The above business considerations will factor in when one employs a prioritization
method to logically rank requirements’ priority.
Kano Analysis – Developed by Noriaki Kano, the goal of this method is to isolate
customer requirements from incremental requirements. This marketing-savvy method
assigns one of four categories to each requirement (each of which has a strong focus on
the customer’s perspective):
(1) Surprise and delight
(2) More is better
(3) Must be
(4) Better not be.
The Money Game – Coined by Larimar Consulting, this method has the analyst
assemble stakeholders and give them each a certain amount of currency (board game
dollars or coins, for example). Each stakeholder has fewer dollars than the project has
requirements, and the stakeholders must spend their dollars on the requirement(s) that
they want most. Once the analyst has counted the money, she can determine which
requirements have the highest value to the group.
100-Point Method – In this method, each stakeholder is given 100 points to “spend” on
the requirements set any way they wish. For example, if a stakeholder strongly feels that
only two requirements are truly needed, he can spend 50 points on each. However, if
another stakeholder feels that 10 requirements are needed, but that two are more
important than the others, she might spend 5 points each on the 8 less-important ones,
and 30 points each on the two that are more important in her view.
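The point counting in the 100-Point Method (and, similarly, the Money Game) is easy to mechanize. A minimal Python sketch, with hypothetical stakeholder names and requirement IDs:

```python
from collections import Counter


def tally(votes):
    """Sum each stakeholder's 100-point allocation per requirement."""
    totals = Counter()
    for allocation in votes.values():
        # every stakeholder must spend exactly 100 points
        assert sum(allocation.values()) == 100
        totals.update(allocation)
    # requirements ordered by total points, highest first
    return totals.most_common()


# Hypothetical allocations from two stakeholders
votes = {
    "alice": {"R1": 50, "R2": 50},
    "bob":   {"R1": 30, "R2": 30, "R3": 40},
}
print(tally(votes))
```

The ranked totals give the group's priorities directly: requirements with the most points are implemented first.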
Theory W – This method uses negotiation and continual risk assessment to ensure that
every stakeholder has a “win” in the project—a requirement or requirements that he or
she feels strongly about deploying on schedule. To accomplish this, each stakeholder
must privately rank the entire list of requirements and decide which are truly most important and, among those, which they might be willing to give up. Then, as a
group, requirement priorities are negotiated and subsequently protected from risks
throughout the project.
GAP Analysis
In information technology, gap analysis is an assessment tool to help identify differences
between information systems or applications. A gap is sometimes called "the space
between where we are and where we want to be." A gap analysis helps bridge that
space by highlighting which requirements are being met and which are not. The tool
provides a foundation for measuring the investment of time, money and human
resources that's required to achieve a particular outcome.
In software development, for instance, a gap analysis can be used to document which
services and/or functions have been accidentally left out, which ones have been
deliberately eliminated and which still need to be developed. In compliance, a gap
analysis can be used to compare what is required by law to what is currently being done.
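At its simplest, a gap analysis is a comparison of two sets: what is required versus what the current system delivers. A minimal Python sketch, with hypothetical feature names:

```python
# Hypothetical required and delivered capabilities
required    = {"login", "audit log", "export to PDF", "role-based access"}
implemented = {"login", "export to PDF", "legacy report"}

gaps      = required - implemented   # needed but missing: still to be developed
surplus   = implemented - required   # present but not required: candidates to drop
satisfied = required & implemented   # requirements already being met

print("gaps:", sorted(gaps))
print("surplus:", sorted(surplus))
print("satisfied:", sorted(satisfied))
```

The `gaps` set is "the space between where we are and where we want to be"; sizing it is the basis for estimating the time, money and people needed to close it.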
Some elicitation techniques are most helpful in understanding stakeholder needs, while other techniques are most helpful in defining high-level and detailed requirements, or validating detailed requirements with the stakeholders. Examples include:
Interview
Observation
Questionnaires
JAD
Brainstorming
Document Analysis
Focus Group
Interface Analysis
Prototyping
Requirements Workshop
Reverse Engineering
Survey
Interview Technique
It is more suitable to ask open-ended questions in an interview rather than closed-ended questions. Open-ended questions allow the interviewee to respond in any way that seems appropriate. An example of an open-ended question is: "Why are you dissatisfied with the accounts report?"
Advantages
Generally easy, because it can be done with minimal preparation.
Interviews of individuals and small groups require less planning and scheduling effort than large
workshops.
Interviews of individuals and small groups require less stakeholder commitment than large
workshops.
Interviews provide an opportunity to explore or clarify topics in more detail.
Disadvantages
The questions used in the interview may reflect the interviewer’s preconceived ideas, which can
influence the responses.
For projects with a large number of stakeholders, the interview technique can be time-consuming and inefficient.
Conflicts and inconsistencies between stakeholder information need to be resolved in additional
interviews.
This technique does not allow different stakeholders to hear and elaborate upon the information
being relayed.
Observation Technique
The study of users in their natural habitats is what observation is about. By observing users, an
analyst can identify a process flow, awkward steps, pain points and opportunities for improvement.
Observation can be passive or active (asking questions while observing). Passive observation is
better for getting feedback on a prototype (to refine requirements), whereas active observation is more effective at getting an understanding of an existing business process. Either approach can be used
to uncover implicit requirements that otherwise might go overlooked.
Disadvantages
Because people usually feel uncomfortable when being watched, they may unwittingly perform differently when being observed.
The work being observed may not involve the level of difficulty or volume normally
experienced during that time period.
Some systems activities may take place at odd times, causing a scheduling inconvenience
for the systems analyst.
The tasks being observed are subject to various types of interruptions.
If people have been performing tasks in a manner that violates standard
operating procedures, they may temporarily perform their jobs correctly while you are
observing them. In other words, people may let you see what they want you to see.
Questionnaires
Questionnaires are special-purpose documents that allow the analyst to collect information and
opinions from respondents. The document can be mass-produced and distributed to respondents,
who can then complete the questionnaire on their own time. Questionnaires allow the analyst to
collect facts from a large number of people while maintaining uniform responses. When dealing with
a large audience, no other fact-finding technique can tabulate the same facts as efficiently.
Questionnaires can be categorized as follows:
1. Free formatted questionnaires.
2. Fixed formatted questionnaires.
Free formatted questionnaires – offer the respondent greater latitude in the answer. A question is asked and the respondent writes the answer in the space provided after the question.
Fixed formatted questionnaires – contain questions that require the selection of predefined answers.
1. Multiple choice questions (MCQ)
2. Rating questions.
3. Ranking questions
Advantages
Most questionnaires can be answered quickly. People can complete the questions at their
convenience.
This provides a relatively inexpensive means of gathering information from a large number of individuals performing the same task.
Questionnaires allow individuals to maintain anonymity.
Responses can be tabulated quickly.
Disadvantages
The number of respondents is often low.
There is no guarantee that individuals will answer all the questions.
Rewording of questions which are difficult to understand is not possible.
It is not possible to observe the individual's body language.
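The quick tabulation that questionnaires allow can be sketched for a single fixed-format rating question; the responses below are hypothetical:

```python
from collections import Counter

# Hypothetical responses to one fixed-format rating question
responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]

tally = Counter(responses)
total = len(responses)
# print each answer with its count and share of all responses
for answer, count in tally.most_common():
    print(f"{answer}: {count} ({100 * count / total:.0f}%)")
```

Because fixed-format answers come from a predefined set, responses from hundreds of questionnaires can be counted this mechanically, which is why no other fact-finding technique tabulates facts as efficiently.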
Prototyping Technique
Prototypes can be very effective at gathering feedback. Low fidelity prototypes can be used as an
active listening tool. Often, when people cannot articulate a particular need in the abstract, they can
quickly assess if a design approach would address the need. Prototypes are most efficiently done
with quick sketches of interfaces and storyboards. Prototypes are even being used as the “official
requirements” in some situations.
Advantages
Allows users and developers to experiment with the software and develop an understanding
of how the system might work.
Serves as a training mechanism for users.
May minimize the time spent for fact-finding and help define more stable and reliable
requirements.
Disadvantages
Users may develop unrealistic expectations based on the performance, reliability, and
features of the prototype. Prototypes can only simulate system functionality and are
incomplete in nature. Care must be taken to educate the users of this fact and not to mislead
them.
Doing prototyping may extend the development schedule and increase the development
costs.
Brainstorming Techniques
Brainstorming is used in requirements elicitation to get as many ideas as possible from a group of
people. Generally used to identify possible solutions to problems, and clarify details of opportunities.
Brainstorming casts a wide net, identifying many different possibilities. Prioritization of those
possibilities is important to finding the needles in the haystack.
Document Analysis
Reviewing the documentation of an existing system can help when creating AS-IS process
documents, as well as driving gap analysis for scoping of migration projects. In an ideal world, we
would even be reviewing the requirements that drove creation of the existing system – a starting
point for documenting current requirements. Nuggets of information are often buried in existing
documents that help us ask questions as part of validating requirement completeness.
Focus Group
A focus group brings together a group of pre-qualified stakeholders and end-users to discuss their needs, attitudes and expectations of the proposed system in a session guided by a moderator. It can surface a wide range of views in a single sitting, although dominant participants may bias the feedback.
Interface Analysis
Interfaces for a software product can be human or machine. Integration with external systems and
devices is just another interface. User-centric design approaches are very effective at making sure
that we create usable software. Interface analysis – reviewing the touch points with other external
systems – is important to make sure we don’t overlook requirements that aren’t immediately visible
to users.
Requirements Workshop
More commonly known as a joint application design (JAD) session, workshops can be very effective
for gathering requirements. More structured than a brainstorming session, involved parties
collaborate to document requirements. One way to capture the collaboration is with creation of
domain-model artifacts (like static diagrams, activity diagrams). A workshop will be more effective
with two analysts than with one, where a facilitator and a scribe work together.
Reverse Engineering
Is this a starting point or a last resort? When a migration project does not have access to sufficient
documentation of the existing system, reverse engineering will identify what the system does. It will
not identify what the system should do, and will not identify when the system does the wrong thing.
Survey
When collecting information from many people – too many to interview with budget and time
constraints – a survey or questionnaire can be used. The survey can force users to select from
choices, rate something ("Agree Strongly, Agree…"), or have open-ended questions allowing free-form responses. Survey design is hard: questions can bias the respondents, so do not assume that you can create a survey on your own and get meaningful insight from the results. A well-designed survey can provide qualitative guidance for characterizing the market; it should not be used for prioritization of features or requirements.
Use cases
A use case is a technique for documenting the potential requirements of a new system or software
change. Each use case provides one or more scenarios that convey how the system should interact
with the end user or another system to achieve a specific business goal. Use cases typically avoid
technical jargon, preferring instead the language of the end user or domain expert. Use cases are
often co-authored by requirements engineers and stakeholders.
Use cases are deceptively simple tools for describing the behavior of software or systems. A use
case contains a textual description of all of the ways which the intended users could work with the
software or system. Use cases do not describe any internal workings of the system, nor do they
explain how that system will be implemented. They simply show the steps that a user follows to
perform a task. All the ways that users interact with a system can be described in this manner.
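The structure of a textual use case (a name, an actor, a goal, and the steps of a scenario) can be sketched as a small data structure. This Python sketch uses an invented ATM example; it shows one possible layout, not a standard format:

```python
from dataclasses import dataclass, field


@dataclass
class UseCase:
    """A minimal representation of a textual use case (illustrative fields only)."""
    name: str
    actor: str
    goal: str
    main_scenario: list = field(default_factory=list)


# Hypothetical use case for an ATM system
withdraw = UseCase(
    name="Withdraw Cash",
    actor="Bank Customer",
    goal="Obtain cash from own account",
    main_scenario=[
        "Customer inserts card and enters PIN",
        "System validates the card and PIN",
        "Customer selects an amount",
        "System dispenses cash and prints a receipt",
    ],
)
for step_no, step in enumerate(withdraw.main_scenario, start=1):
    print(step_no, step)
```

Note that the steps describe only the interaction between actor and system, in the user's language; nothing about the system's internal workings appears.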
An object is defined as an instance of a "class" (category). For example, if we talk of two students called "John" and "Smith", they are said to be two instances of the Student class. An object consists of a structure that has "attributes" (data) and "operations" (behavior). An object of the Student class can have attributes such as Name, Address and Contact Number, and operations such as Listen, Write and Speak.
A Class is a template for making objects. We'll take a computer as an example. The Computer class has the attributes brandName, serialNumber and speed, along with the operations bootup(), emitsound() and display(). You can create new objects from this Computer class.

Computer
Attributes: brandName, serialNumber, speed
Operations: bootup(), emitsound(), display()
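The Computer class translates directly into code. A minimal Python sketch (the attribute values given to the objects are invented):

```python
class Computer:
    """Template (class) from which computer objects are created."""

    def __init__(self, brand_name, serial_number, speed):
        self.brand_name = brand_name        # attributes (data)
        self.serial_number = serial_number
        self.speed = speed

    def bootup(self):                       # operations (behavior)
        return f"{self.brand_name} booting up"

    def emitsound(self):
        return "beep"


# Two distinct objects (instances) created from the same class
pc1 = Computer("Acme", "SN-001", "3.2 GHz")
pc2 = Computer("Orbit", "SN-002", "2.4 GHz")
print(pc1.bootup())
print(pc2.bootup())
```

Each object carries its own copy of the attribute values, while the operations are defined once in the class.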
The purpose of object orientation is to develop software that reflects a particular slice of the world. The more attributes and behaviors you take into account, the more your model will be in tune with reality.
Encapsulation
When an object carries out its operations, those operations are hidden, just as the TV hides its
operations from the person watching it. This is the essence of encapsulation.
When people watch a television show, they usually don't know or care about the complex electronics
that sit behind the TV screen, or about the many operations that must occur to paint the image on the
screen. The TV does what it does and hides the process from us. Most other appliances work this
way. In the software world, encapsulation helps cut down on the potential for bad things to happen.
In a system that consists of objects, the objects depend on each other in various ways. If one of them
happens to malfunction and software engineers have to change it in some way, hiding its operations
from other objects means that it probably won't be necessary to change those other objects.
An object hides what it does from other objects and from the outside world; for this reason,
encapsulation is also called information hiding. An object still has to present a "face" to the outside
world so that you can initiate its operations. In a wider context, encapsulation refers to binding
attributes and operations together within an object and hiding them from other objects and from the
outside. The TV has a set of buttons, either on the TV itself or on the remote; these buttons form the
TV's interface.
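A minimal Python sketch of encapsulation using the TV example. The class and method names are illustrative, not from the text; the leading underscore marks internals as hidden by Python convention:

```python
class Television:
    """Complex internals are hidden; only the 'buttons' are public."""

    def __init__(self):
        self._powered = False              # internal state, hidden by convention

    # --- internal (hidden) operations ---
    def _warm_up_electronics(self):
        return "electronics ready"

    def _paint_image(self):
        return "image on screen"

    # --- public interface: the 'buttons' ---
    def press_power(self):
        # The viewer never invokes the hidden steps directly;
        # pressing the button triggers them internally.
        self._powered = True
        self._warm_up_electronics()
        return self._paint_image()


tv = Television()
print(tv.press_power())                    # the viewer only uses the interface
```

If the hidden steps change, code that only presses the buttons does not need to change, which is exactly the benefit the text describes.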
(Figure: an inheritance hierarchy. A superclass with attributes Eno, Name and Address and
operations AddNew(), Delete() and Modify() has two subclasses, Manager (Division,
Responsibilities) and Programmer (ProjectAssigned, Language), each providing its own Display()
and Salary() operations.)
Abstraction
Abstraction is the process of filtering out all but the relevant properties and operations of an object.
Different situations require different information. For example, for an order processing system to
maintain a customer object, properties such as name and address would be recorded, while
properties such as height and skin colour would not be required; in an immigration system, those
properties would be required.
In any case, what you are left with, after you have decided what to include and what to exclude, is
an abstraction.
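The customer example can be sketched as two abstractions of the same real-world person, each keeping only the properties its system needs. The field names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Abstraction for an order processing system."""
    name: str
    address: str

@dataclass
class Applicant:
    """Abstraction of the same person for an immigration system."""
    name: str
    address: str
    height_cm: int
    skin_colour: str

# The same person, abstracted two different ways.
c = Customer("Malik", "Colombo")
a = Applicant("Malik", "Colombo", 175, "brown")
```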
Message Passing
In a system, objects work together. They do so by sending messages to one another: one object
sends another a message to perform an operation, and the receiving object performs that operation.
When you want to watch a TV show, you take the remote, settle into your chair, and push its buttons.
The remote object sends a message to the TV object to turn itself on. The TV object receives this
message, knows how to perform the turn-on operation, and turns itself on.
Most of the things you do from the remote you can also do by getting out of the chair, going to the
TV, and clicking buttons on the TV. The interface the TV presents to you is obviously not the same
interface it presents to the remote.
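The remote-and-TV exchange can be sketched as one object sending a message to another; class and method names are illustrative:

```python
class TV:
    def __init__(self):
        self.on = False

    def turn_on(self):
        # The operation the TV knows how to perform.
        self.on = True
        return "TV is on"


class Remote:
    def __init__(self, tv):
        self.tv = tv                  # the remote holds a reference to the TV

    def press_power(self):
        # Sending a message: the remote asks the TV to run its operation.
        return self.tv.turn_on()


tv = TV()
remote = Remote(tv)
print(remote.press_power())
```

The remote does not know how the TV turns itself on; it only knows which message to send.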
Polymorphism
Sometimes an operation has the same name in different classes. You can open a door, you can
open a window, or you can open a newspaper; in each case you are performing a different
operation. In OO, each class "knows" how that operation is supposed to take place. This is called
polymorphism.
Polymorphism is closely associated with inheritance. Assume that several subclasses inherit the
same operation from a base class, but each uses its own method of implementing the operation.
When that operation is called, each class demonstrates its own unique behavior based on its
implementation: that is polymorphism.
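Using the Manager and Programmer subclasses shown earlier, polymorphism can be sketched as follows; the salary figures are invented purely for illustration:

```python
class Employee:
    def __init__(self, name):
        self.name = name

    def salary(self):
        # Base operation; each subclass supplies its own implementation.
        raise NotImplementedError


class Manager(Employee):
    def salary(self):
        return 5000 + 1000            # base pay plus responsibility allowance


class Programmer(Employee):
    def salary(self):
        return 5000 + 500             # base pay plus language allowance


# The same message, salary(), produces class-specific behavior.
staff = [Manager("Ann"), Programmer("Ben")]
print([e.salary() for e in staff])
```

The caller sends the same salary() message to every Employee without knowing or caring which subclass responds.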
Objects and their associations form the backbone of functioning systems. In order to model those
systems, you have to understand what those associations are. If you’re aware of the possible types
of associations, you’ll have a well-stocked bag of tricks when you talk to clients about their needs,
gather their requirements, and create models of the systems that help them meet their business
challenges.
The important thing is to use the concepts of OO to help you understand the client’s area of
knowledge, and to illustrate your understanding to the client in terms that he or she understands.
That’s where the UML comes in.
Goals of UML
The primary goals in the design of the UML were:
Provide users with a ready-to-use, expressive visual modeling language so they can develop
and exchange meaningful models.
Provide extensibility and specialization mechanisms to extend the core concepts.
Be independent of particular programming languages and development processes.
Provide a formal basis for understanding the modeling language.
Encourage the growth of the OO tools market.
Support higher-level development concepts such as collaborations, frameworks, patterns and
components.
Integrate best practices.
UML is a Language
A language provides a vocabulary and the rules for combining words in that vocabulary for the
purpose of communication. A modeling language is a language whose vocabulary and rules focus on
the conceptual and physical representation of a system. A modeling language such as UML is thus a
standard language for software blueprints.
The vocabulary and rules of a language such as the UML tell you how to create and read well-
formed models, but they don’t tell you what models you should create and when you should create
them.
There are some things about a software system you cannot understand unless you build models that
go beyond the textual programming language. Some things are best modeled textually, others are
best modeled graphically. The UML is such a graphical language.
The UML is more than just a collection of graphical symbols. Rather, behind each symbol in the UML
notation is a well-defined semantics. In this manner, one developer can write a model in the UML,
and another developer, or even another tool, can interpret that model unambiguously.
The UML is not limited to modeling software. In fact, it is expressive enough to model non-software
systems, such as workflow in the legal system, the structure and behavior of a patient healthcare
system, and the design of hardware.
UML Diagrams
UML diagrams represent two different views of a system model
Static (or structural) view: emphasizes the static structure of the system using objects,
attributes, operations and relationships. The structural view includes class diagrams and
composite structure diagrams.
Dynamic (or behavioral) view: emphasizes the dynamic behavior of the system by showing
collaborations among objects and changes to the internal states of objects. This view
includes sequence diagrams, activity diagrams and state machine diagrams.
UML 2.2 has 14 types of diagrams divided into two categories. Seven diagram types represent
structural information, and the other seven represent general types of behavior, including four that
represent different aspects of interactions. These diagrams can be categorized hierarchically as follows.
Structure diagrams
Structure diagrams emphasize the things that must be present in the system being modeled. Since
structure diagrams represent the structure they are used extensively in documenting the architecture
of software systems.
Class diagram: describes the structure of a system by showing the system's classes, their
attributes, and the relationships among the classes.
Component diagram: describes how a software system is split up into components and
shows the dependencies among these components.
Composite structure diagram: describes the internal structure of a class and the
collaborations that this structure makes possible.
Deployment diagram: describes the hardware used in system implementations and the
execution environments and artifacts deployed on the hardware.
Object diagram: shows a complete or partial view of the structure of a modeled system at a
specific time.
Package diagram: describes how a system is split up into logical groupings by showing the
dependencies among these groupings.
Profile diagram: operates at the metamodel level to show stereotypes as classes with the
<<stereotype>> stereotype, and profiles as packages with the <<profile>> stereotype. The
extension relation (solid line with closed, filled arrowhead) indicates what metamodel element
a given stereotype is extending.
Behavior diagrams
Behavior diagrams emphasize what must happen in the system being modeled. Since behavior
diagrams illustrate the behavior of a system, they are used extensively to describe the functionality of
software systems.
Activity diagram: describes the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
UML state machine diagram: describes the states and state transitions of the system.
Use case diagram: describes the functionality provided by a system in terms of actors, their
goals represented as use cases, and any dependencies among those use cases.
Interaction diagrams
Interaction diagrams, a subset of behavior diagrams, emphasize the flow of control and data among
the things in the system being modeled:
Communication diagram: shows the interactions between objects or parts in terms of
sequenced messages. They represent a combination of information taken from Class,
Sequence, and Use Case Diagrams describing both the static structure and dynamic
behavior of a system.
Interaction overview diagram: provides an overview in which the nodes represent interaction
diagrams.
Sequence diagram: shows how objects communicate with each other in terms of a sequence
of messages. Also indicates the life spans of objects relative to those messages.
Timing diagram: a specific type of interaction diagram in which the focus is on timing
constraints.
Include
In one form of interaction, a given use case may include another. "Include is a
Directed Relationship between two use cases, implying that the behavior of
the included use case is inserted into the behavior of the including use case".
The first use case often depends on the outcome of the included use case. This is useful for
extracting truly common behaviors from multiple use cases into a single description. The notation is
a dashed arrow from the including to the included use case, with the label "«include»". This usage
resembles a macro expansion where the included use case behavior is placed inline in the base use
case behavior. There are no parameters or return values. To specify the location in a flow of events
at which the base use case includes the behavior of another, you simply write include followed by the
name of the use case you want to include.
Extend
In another form of interaction, a given use case (the extension) may extend another. This
relationship indicates that the behavior of the extension use case may be inserted in the extended
use case under some conditions. The notation is a dashed arrow from the extension to the extended
use case, with the label "«extend»". Notes or constraints may be associated with this
relationship to illustrate the conditions under which this behavior will be executed.
Modelers use the «extend» relationship to indicate use cases that are "optional" to the base use
case. Depending on the modeler's approach "optional" may mean "potentially not executed with the
base use case" or it may mean "not required to achieve the base use case goal".
Generalization
In the third form of relationship among
use cases, a generalization/specialization
relationship exists. A given use case may
have common behaviors, requirements,
constraints, and assumptions with a more
general use case. In this case, describe
them once, and deal with it in the same
way, describing any differences in the
specialized cases. The notation is a solid
line ending in a hollow triangle drawn
from the specialized to the more general
use case (following the standard
generalization notation).
Relationships
A relationship is a general term covering the specific types of logical connections found on class and
object diagrams. UML shows the following relationships:
Association
An association can be named, and the ends of an association can be adorned with role names,
ownership indicators, multiplicity, visibility, and other properties. There are five different types of
association. Bi-directional and uni-directional associations are the most common ones. For instance,
a flight class is associated with a plane class bi-directionally. Associations can only be shown on
class diagrams. Association represents the static relationship shared among the objects of two
classes. Example: "department offers courses", is an association relation.
Aggregation
Aggregation is a variant of the "has a" or association relationship; aggregation is more specific than
association. It is an association that represents a part-whole or part-of relationship. As a type of
association, an aggregation can be named and have the same adornments that an association can.
However, an aggregation may not involve more than two classes.
Aggregation can occur when a class is a collection or container of other classes, but where the
contained classes do not have a strong life cycle dependency on the container—essentially, if the
container is destroyed, its contents are not.
In UML, it is graphically represented as a hollow diamond shape on the containing class end of the
tree of lines that connect contained class(es) to the containing class.
Composition
Composition is a stronger variant of the "owns a" or association relationship; composition is more
specific than aggregation. It is represented with a solid diamond shape.
The UML graphical representation of a composition relationship is a filled diamond shape on the
containing class end of the tree of lines that connect contained class(es) to the containing class.
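The difference between aggregation (weak, "has a" ownership) and composition (strong, life-cycle ownership) can be sketched in Python; the classes below are illustrative:

```python
class Course:
    def __init__(self, title):
        self.title = title


class Department:
    """Aggregation: the Department holds Courses, but does not own
    their life cycle; Courses exist independently."""
    def __init__(self, courses):
        self.courses = courses


class Engine:
    def __init__(self):
        self.running = False


class Car:
    """Composition: the Engine is created inside the Car and has no
    life of its own outside it."""
    def __init__(self):
        self.engine = Engine()


maths = Course("Maths")
dept = Department([maths])
del dept                      # destroying the container...
assert maths.title == "Maths" # ...does not destroy the contained Course

car = Car()                   # the Engine lives and dies with the Car
```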
Generalization
The Generalization relationship indicates that one of the two related classes (the subtype) is
considered to be a specialized form of the other (the super type) and supertype is considered as
'Generalization' of subtype. In practice, this means that any instance of the subtype is also an
instance of the supertype. An exemplary tree of generalizations of this form is found in binomial
nomenclature: human beings are a subtype of simian, which are a subtype of mammal, and so on.
The UML graphical representation of a Generalization is a hollow triangle shape on the supertype
end of the line (or tree of lines) that connects it to one or more subtypes.
The generalization relationship is also known as the inheritance or "is a" relationship.
The supertype in the generalization relationship is also known as the "parent", superclass, base
class, or base type.
Realization
In UML modeling, a realization relationship is a relationship between two model elements, in which
one model element (the client) realizes (implements or executes) the behavior that the other model
element (the supplier) specifies. A realization is indicated by a dashed line with an unfilled arrowhead
towards the supplier.
Realizations can only be shown on class or component diagrams.
A realization is a relationship between classes, interfaces, components, and packages that connects
a client element with a supplier element. A realization relationship between classes and interfaces
and between components and interfaces shows that the class realizes the operations offered by the
interface.
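In Python, realization can be approximated with an abstract base class acting as the interface (the supplier) that a concrete class (the client) implements; the names below are illustrative:

```python
from abc import ABC, abstractmethod

class Printable(ABC):
    """The supplier: specifies the behavior to be realized."""
    @abstractmethod
    def print_out(self):
        ...


class Invoice(Printable):
    """The client: realizes (implements) the specified operation."""
    def print_out(self):
        return "invoice printed"


inv = Invoice()
print(inv.print_out())
```

Attempting to instantiate a Printable directly, or an Invoice that omits print_out(), raises a TypeError, which mirrors the rule that the realizing class must provide the operations the interface offers.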
Dependency
Dependency is a weaker form of relationship which
indicates that one class depends on another because it
uses it at some point of time. Dependency exists if a
class is a parameter variable or local variable of a
method of another class.
Multiplicity
The association relationship indicates that (at least) one of the two related classes makes reference
to the other. In contrast with the generalization relationship, this is most easily understood through
the phrase 'A has a B' (a mother cat has kittens, kittens have a mother cat).
The UML representation of an association is a line with an optional arrowhead indicating the role of
the object(s) in the relationship, and an optional notation at each end indicating the multiplicity of
instances of that entity (the number of objects that participate in the association).
Common multiplicities are:
0..1 No instances, or one instance (optional, may)
1 Exactly one instance
0..* or * Zero or more instances
1..* One or more instances (at least one)
1..3, 6, 9..* Mixed instances
(Figure: the class diagram for the system's Withdraw Money use case.)
Components are wired together by using an assembly connector to connect the required interface of
one component with the provided interface of another component. This illustrates the service
consumer - service provider relationship between the two components.
An assembly connector is a "connector between two components that defines that one component
provides the services that another component requires. An assembly connector is a connector that is
defined from a required interface or port to a provided interface or port." [1]
When using a component diagram to show the internal structure of a component, the provided and
required interfaces of the encompassing component can delegate to the corresponding interfaces of
the contained components.
A delegation connector is a "connector that links the external contract of a component (as specified
by its ports) to the internal realization of that behavior by the component’s parts." [1]
The example above illustrates what a typical Insurance policy administration system might look like.
Each of the components depicted in the above diagram may have other component diagrams
illustrating their internal structure.
Component diagrams can also show the interfaces used by the components to communicate with
each other.
Notation
State diagrams have very few elements. The basic elements are rounded boxes representing the
states of the object and arrows indicating the transition to the next state. The activity section of the
state symbol depicts what activities the object will be doing while it is in that state.
All state diagrams begin with an initial state of the object: the state of the object when it is created.
After the initial state, the object begins changing states. Conditions based on the activities determine
what state the object transitions to next.
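The state-and-transition idea can be sketched as a small Python state machine; the states and events below are invented for illustration:

```python
class Order:
    """An object that moves through states, as a state diagram would show."""

    # (current state, event) -> next state
    TRANSITIONS = {
        ("created", "pay"): "paid",
        ("paid", "ship"): "shipped",
    }

    def __init__(self):
        self.state = "created"        # the initial state, set on creation

    def handle(self, event):
        # Events (conditions) determine the next state; anything not in
        # the transition table leaves the state unchanged.
        key = (self.state, event)
        if key in self.TRANSITIONS:
            self.state = self.TRANSITIONS[key]
        return self.state


o = Order()
o.handle("pay")
o.handle("ship")
print(o.state)
```

The transition table plays the role of the arrows in the diagram, and the instance attribute plays the role of the rounded state boxes.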
A long term goal of HCI is to design systems that minimize the barrier between the human's
cognitive model of what they want to accomplish and the computer's understanding of the user's
task.
Professional practitioners in HCI are usually designers concerned with the practical application of
design methodologies to real-world problems. Their work often revolves around designing graphical
user interfaces and web interfaces.
6.2 Usability of User Interfaces
In human-computer interaction and computer science, usability studies the elegance and clarity with
which the interaction with a computer program or a web site (web usability) is designed. Complex
computer systems are finding their way into everyday life, and at the same time the market is
becoming saturated with competing brands. This has led to usability becoming more popular and
widely recognized in recent years, as companies see the benefits of researching and developing their
products with user-oriented instead of technology-oriented methods. The term user friendly is often
used as a synonym for usable, though it may also refer to accessibility. Usability is also used to
describe the quality of user experience across websites, software, products and environments.
Usability is also very important in website development (web usability). According to Jakob Nielsen,
"Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People
don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a
training class or a manual for a Web site. People have to be able to grasp the functioning of the site
immediately after scanning the home page—for a few seconds at most." Otherwise, most casual
users will simply leave the site and continue browsing—or shopping—somewhere else.
When evaluating user interfaces for usability, the definition can be as simple as "the perception of a
target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the
Interface". Each component may be measured subjectively against criteria e.g. Principles of User
Interface Design, to provide a metric, often expressed as a percentage.
Usability is an example of a non-functional requirement. As with other non-functional requirements,
usability cannot be directly measured but must be quantified by means of indirect measures or
attributes such as, for example, the number of reported problems with ease-of-use of a system.
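As a hypothetical illustration of turning subjective criterion scores into the percentage metric the text mentions (the criteria and rating scale below are assumptions, not from the source):

```python
def usability_score(ratings, max_rating=5):
    """ratings: criterion name -> subjective score on a 1..max_rating scale.
    Returns the overall score as a percentage of the maximum possible."""
    total = sum(ratings.values())
    return 100.0 * total / (len(ratings) * max_rating)


score = usability_score({"effectiveness": 4,
                         "efficiency": 3,
                         "learnability": 5})
print(f"{score:.0f}%")   # prints 80%
```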
Usability Considerations
Usability includes considerations such as:
Who are the users, what do they know, and what can they learn?
What do users want or need to do?
What is the general background of the users?
What is the context in which the user is working?
What has to be left to the machine?
Increased usability in the workplace fosters several responses from employees. Along with any
positive feedback, "workers who enjoy their work do it better, stay longer in the face of temptation,
and contribute ideas and enthusiasm to the evolution of enhanced productivity." In order to create
standards, companies often implement experimental design techniques that create baseline levels.
Areas of concern in an office environment include (though are not necessarily limited to):
Working Posture
Design of Workstation Furniture
Screen Displays
Input Devices
Organizational Issues
Office Environment
Software Interface
User guidance: the interface should provide meaningful feedback when errors occur
and provide context-sensitive user help facilities.
User diversity: the interface should provide appropriate interaction facilities for
different types of system users.
(Figure: several applications, Application #1, #2 and #3, share centralized data through a database
managed by a DBMS. A DBMS manages data resources in much the same way that an operating
system manages hardware resources.)
During the design stage of the database, there are two main stages:
Logical and/or conceptual data design
Physical data design
The logical data design includes the concept of normalization, while the conceptual data design is
done using entity relationship diagrams in the case of a relational database design. The physical
design of the database may further apply the concept of denormalization.
There is still an argument over which should come first, the ERD or normalization. Both should be
done; for the purpose of the examination, and mainly for simplicity, ERD is taken first, followed by a
discussion of normalization.
7.2 Entity Relationship Modeling
In software engineering, an entity-relationship model (ERM) is an abstract and conceptual
representation of data. Entity-relationship modeling is a database modeling method, used to produce
a type of conceptual schema or semantic data model of a system, often a relational database, and its
requirements in a top-down fashion. Diagrams created by this process are called entity-relationship
diagrams, ER diagrams, or ERDs.
Following are some of the important definitions which are available under ERD:
Entity - An instance of a physical object in the real world.
Entity Class - A group of objects of the same type.
Attributes (Properties) - Entities have attributes or properties that describe their characteristics.
Composite Attribute - An attribute that is composed of several more basic attributes.
(Figure: an entity with a simple identifier and a composite attribute.)
Since the skills of an employee can have several values, they are classified as a multi-valued
attribute.
Relationships can be characterized in two main ways: by degree and by cardinality. The degree of a
relationship, as specified earlier, is the number of entity types that participate in the relationship.
Cardinality of relationships
Following are the main cardinalities which are available:
One – to – One
– Each entity in the relationship will have exactly one related entity
One – to – Many
– An entity on one side of the relationship can have many related entities, but an entity
on the other side will have a maximum of one related entity
Many – to – Many
– Entities on both sides of the relationship can have many related entities on the other
side
Further when considering the cardinality constraints, mandatory and optionality will be two main
considerations:
Cardinality Constraints - the number of instances of one entity that can or must be associated
with each instance of another entity.
Minimum Cardinality
– If zero, then optional
– If one or more, then mandatory
Maximum Cardinality
– The maximum number
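A one-to-many cardinality such as "department offers courses" can be sketched in Python; the classes are illustrative:

```python
class Department:
    """The 'one' side: a department offers many courses."""
    def __init__(self, name):
        self.name = name
        self.courses = []             # zero or more related Course entities

    def offer(self, title):
        course = Course(title, self)
        self.courses.append(course)
        return course


class Course:
    """The 'many' side: each course belongs to exactly one department
    (a mandatory minimum cardinality of one)."""
    def __init__(self, title, department):
        self.title = title
        self.department = department


it = Department("IT")
sad = it.offer("System Analysis and Design")
print(len(it.courses), sad.department.name)
```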
The following example indicates how the ERD will be represented using an entity instance diagram:
Strong entities
Exist independently of other types of entities
Has its own unique identifier
Represented with single-line rectangle
Weak entity
Dependent on a strong entity…cannot exist on its own
Does not have a unique identifier
Represented with double-line rectangle
Identifying relationship
Links strong entities to weak entities
Represented with double line diamond
Associative Entity
It’s an entity – it has attributes
AND it’s a relationship – it links entities together
When should a relationship with attributes instead be an associative entity?
– All relationships for the associative entity should be many
– The associative entity could have meaning independent of the other entities
– The associative entity preferably has a unique identifier, and should also have other
attributes
– The associative entity may participate in relationships other than those involving the
entities of the associated relationship
– Ternary relationships should be converted to associative entities
An associative entity is drawn as a rectangle with a diamond inside. Note that the many-to-many
cardinality symbols face toward the associative entity and not toward the other entities.
The following shows the Pine Valley Furniture relational schema. All the examples are based on the
following example:
The above example indicates the mapping of a single entity into the corresponding relational
schema. Consider the following how it will map the composite attribute into the relation.
Let’s focus on mapping a many to many relationship. This will introduce an associative entity.
Normalization is the process of decomposing a table or relation into a set of logically related,
manageable relations.
Consider the following relation to find out why we need to perform normalization:
As visible in the above table or relation, there are several reasons why we should normalize the
entire relation:
Insertion anomaly
Deletion anomaly
Update anomaly
Insertion anomaly
Consider inserting a new student into the file: this poses no problem as long as the subject details
can hold null values. However, inserting a new subject without also inserting the student details will
cause a problem, since the NIC number, which is the primary key, cannot contain null values. An
insertion anomaly therefore arises whenever an insertion would require the primary key value to be
left empty.
Deletion anomaly
Suppose we try to delete the details of student 'Malik'; this also deletes the details of the subject 'IS'.
This causes no anomaly, since the details of 'IS' are also available elsewhere. But an attempt to
delete the details of 'Ajith' will cause a problem, since the details of subject 'S3' exist nowhere else
and will be lost along with the student.
Update anomaly
In the above example, if we change the name of the subject 'SD' to 'Programming', the change must
be made to every occurrence of that name; this causes an update anomaly. If instead the subject
'Tech' is changed to 'Networking and Information Technology', the change need be made in only one
place, which causes no anomaly. Therefore, wherever there are duplicate occurrences of the same
record an update anomaly can arise, while a single occurrence causes none. In this sense it is the
opposite of the deletion anomaly.
Normalization is therefore a process that eliminates the above three problems; the levels of
normalization are given on the next page.
At the BCS Diploma level our main emphasis is on the first three normal forms, which we should
clearly understand, and we should have the skill to decompose any given table into a set of
normalized tables.
Therefore to avoid this problem the above table should be normalized as follows:
Student
NIC_no        Student name   Student add
753581405V    Malik          Colombo
832134351V    Saman          Galle
811234738V    Ajith          Colombo

Registration
NIC_no        Sub_code   Sub_name   Lesson no.
753581405V    S1         IS         2
832134351V    S2         Tech       1
832134351V    S1         IS         3
811234738V    S3         SD         2
811234738V    S1         IS         5
The decomposition is done by removing the repeating-group attributes, along with a copy of the
primary key of the main table, into a new table; this new table is named the Registration table or
relation.
The multi-valued attribute seen earlier can be removed in the same way, by decomposing the table
into the two tables given below.
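The same decomposition can be sketched in Python, moving the repeating subject group, with a copy of NIC_no, into a Registration table. The data is taken from the example above (shortened to two students for brevity):

```python
# Unnormalized rows: each student carries a repeating group of subjects.
unnormalized = [
    {"NIC_no": "753581405V", "name": "Malik", "add": "Colombo",
     "subjects": [("S1", "IS", 2)]},
    {"NIC_no": "832134351V", "name": "Saman", "add": "Galle",
     "subjects": [("S2", "Tech", 1), ("S1", "IS", 3)]},
]

student, registration = [], []
for row in unnormalized:
    # One Student row per student.
    student.append({"NIC_no": row["NIC_no"],
                    "name": row["name"], "add": row["add"]})
    # One Registration row per (student, subject) pair, keyed by NIC_no.
    for code, sub_name, lesson in row["subjects"]:
        registration.append({"NIC_no": row["NIC_no"], "Sub_code": code,
                             "Sub_name": sub_name, "Lesson_no": lesson})

print(len(student), len(registration))
```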
(Tables: Employee and Employee_skill.)
A table, or a set of tables, is in second normal form if and only if it is in first normal form and there
is no partial functional dependency.
In the above Student example, the second normal form tables can be generated as follows:

Registration
NIC_no        Sub_code   Lesson no.
753581405V    S1         2
832134351V    S2         1
832134351V    S1         3
811234738V    S3         2
811234738V    S1         5

Subject
Sub_code   Sub_name
S1         IS
S2         Tech
S3         SD
A normalized design will often store different but related pieces of information in separate logical
tables (called relations). If these relations are stored physically as separate disk files, completing a
database query that draws information from several relations (a join operation) can be slow. If many
relations are joined, it may be prohibitively slow. There are two strategies for dealing with this. The
preferred method is to keep the logical design normalized, but allow the database management
system (DBMS) to store additional redundant information on disk to optimize query response. In this
case it is the DBMS software's responsibility to ensure that any redundant copies are kept
consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or
materialized views (Oracle). A view represents information in a format convenient for querying, and
the index ensures that queries against the view are optimized.
A denormalized data model is not the same as a data model that has not been normalized, and
denormalization should only take place after a satisfactory level of normalization has taken place and
that any required constraints and/or rules have been created to deal with the inherent anomalies in
the design.
For example, all the relations are in third normal form and any relations with join and multi-valued
dependencies are handled appropriately.
Denormalization techniques are often used to improve the scalability of Web applications.