Software Engineering Lab
AIM:
To draw the Work Breakdown Structure (WBS) for the system to be automated.
RESULT:
Thus, drawing the Work Breakdown Structure for the system to be automated has been completed
successfully and the output is verified.
Experiment no 02
PERT/CPM charts
Date
AIM:
To schedule all the activities and sub-activities using PERT/CPM charts.
PERT is a technique used for projects where the time required to complete the different activities is not
known. PERT is mainly applied for the scheduling, organization, and integration of the different tasks
within a project. It provides a blueprint of the project and is an efficient technique for project
evaluation.
CPM is a technique used for projects where the time needed to complete each activity is already known.
It is mainly used for determining the approximate time within which a project can be completed. The
critical path is the longest path through the project network, and it determines the minimum time
required to complete the project.
Example: Determine the critical path for a small project consisting of ten activities with the following
characteristics:
Solution:
The network for the given project is drawn below:
As we can see, there are two paths along which the E-values and L-values are equal; the longest such
chain of critical activities is known as the critical path.
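As a supplementary illustration (not part of the original exercise data), the forward and backward passes
used to obtain the E-values and L-values can be sketched in a few lines of Python. The activity names,
durations, and predecessor relationships below are hypothetical placeholders.

# Hypothetical activity table: name -> (duration, list of predecessor activities).
# Predecessors are listed before their successors, so a single forward pass works.
activities = {
    "A": (4, []),
    "B": (3, ["A"]),
    "C": (5, ["A"]),
    "D": (2, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF) of each activity.
ES, EF = {}, {}
for name, (dur, preds) in activities.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS) of each activity.
LS, LF = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    successors = [s for s, (_, ps) in activities.items() if name in ps]
    LF[name] = min((LS[s] for s in successors), default=project_duration)
    LS[name] = LF[name] - dur

# Activities with zero slack (ES == LS) are critical; together they form the critical path.
critical_path = [name for name in activities if LS[name] - ES[name] == 0]
print("Project duration:", project_duration)          # -> 11
print("Critical path:", " -> ".join(critical_path))   # -> A -> C -> D

Applying the same forward and backward passes to the ten-activity network above yields its E-values,
L-values, and critical path.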
Example:
Draw the PERT network for a small project consisting of eight activities with the following
characteristics:
Solution: The network diagram for the given data is shown in the figure below. The expected time and
variance of each activity are computed using the standard PERT formulas.
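For reference, the standard PERT formulas are: expected time te = (to + 4*tm + tp) / 6 and
variance = ((tp - to) / 6)^2, where to, tm, and tp are the optimistic, most likely, and pessimistic time
estimates. A minimal Python sketch (with made-up estimates, not the data of this exercise) is:

def pert_estimate(optimistic, most_likely, pessimistic):
    """Return (expected_time, variance) for one activity using the PERT formulas."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Hypothetical three-point estimates (in days) for a single activity.
te, var = pert_estimate(optimistic=2, most_likely=4, pessimistic=8)
print(f"Expected time: {te:.2f} days, variance: {var:.2f}")   # -> 4.33 days, 1.00

Summing the expected times along the critical path gives the expected project duration, and summing the
variances along that path gives the project variance.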
Result:
Thus, scheduling all the activities and sub-activities using PERT/CPM charts has been completed
successfully and the output is verified.
AIM:
To define use cases and represent them in a use-case document for all the stakeholders of the
system to be automated.
DESCRIPTION:
In the Unified Modeling Language (UML), a use case diagram can summarize the details of your system's users
(also known as actors) and their interactions with the system. To build one, you'll use a set of specialized symbols
and connectors. An effective use case diagram can help your team discuss and represent:
• Scenarios in which your system or application interacts with people, organizations, or external systems
• Goals that your system or application helps those entities (known as actors) achieve
A use case diagram doesn't go into a lot of detail—for example, don't expect it to model the order in which steps are
performed. Instead, a proper use case diagram depicts a high-level overview of the relationship between use cases,
actors, and systems. Experts recommend that use case diagrams be used to supplement a more descriptive textual
use case.
UML is the modeling toolkit that you can use to build your diagrams. Use cases are represented with a labeled oval
shape. Stick figures represent actors in the process, and the actor's participation in the system is modeled with a line
between the actor and use case. To depict the system boundary, draw a box around the use case itself.
Actors: The users that interact with a system. An actor can be a person, an organization, or an outside system that
interacts with your application or system. They must be external objects that produce or consume data.
System: A specific sequence of actions and interactions between actors and the system. A system may also be
referred to as a scenario.
Goals: The end result of most use cases. A successful diagram should describe the activities and variants used to
reach the goal.
Use case diagram symbols and notation
The notation for a use case diagram is pretty straightforward and doesn't involve as many types of symbols as other
UML diagrams. You can use this guide to learn how to draw a use case diagram if you need a refresher. Here are all
the shapes you will be able to find in Lucidchart:
Use cases: Horizontally shaped ovals that represent the different uses that a user might have.
Actors: Stick figures that represent the people actually employing the use cases.
Associations: A line between actors and use cases. In complex diagrams, it is important to know which actors are
associated with which use cases.
System boundary boxes: A box that sets a system scope to use cases. All use cases outside the box would be
considered outside the scope of that system.
Packages: A UML shape that allows you to put different elements into groups. Just as with component diagrams,
these groupings are represented as file folders.
RESULT:
Thus, defining use cases and representing them in a use-case document for all the stakeholders of the
system to be automated has been completed successfully and the output is verified.
Experiment no 04
Date
RISK MITIGATION PLANNING, IMPLEMENTATION, AND PROGRESS MONITORING
AIM:
To identify and analyze all the possible risks and the corresponding risk mitigation plan for the system
to be automated.
RISK MITIGATION PLANNING, IMPLEMENTATION, AND PROGRESS
MONITORING
Definition: Risk mitigation planning is the process of developing options and actions to enhance
opportunities and reduce threats to project objectives. Risk mitigation implementation is the
process of executing risk mitigation actions. Risk mitigation progress monitoring includes tracking
identified risks, identifying new risks, and evaluating risk process effectiveness throughout the project.
Keywords: risk, risk management, risk mitigation, risk mitigation implementation, risk mitigation
planning, risk mitigation progress monitoring
MITRE SE Roles & Expectations: MITRE systems engineers (SEs) working on government programs
develop actionable risk mitigation strategies and monitoring metrics, monitor implementation of risk
mitigation plans to ensure successful project and program completion, collaborate with the government
team in conducting risk reviews across projects and programs, and analyze metrics to determine
ongoing risk status and identify serious risks to elevate to the sponsor or customer.
Background
Risk mitigation planning, implementation, and progress monitoring are depicted in Figure 1. As
part of an iterative process, the risk tracking tool is used to record the results of risk prioritization
analysis (step 3) that provides input to both risk mitigation (step 4) and risk impact assessment
(step 2).
The risk mitigation step involves development of mitigation plans designed to manage, eliminate, or
reduce risk to an acceptable level.
General guidelines for applying risk mitigation handling options are shown in Figure 2.
These options are based on the assessed combination of the probability of occurrence and severity of
the consequence for an identified risk. These guidelines are appropriate for many, but not all, projects
and programs. The handling options are:
Assume/Accept: Acknowledge the existence of a particular risk, and make a deliberate decision to
accept it without engaging in special efforts to control it. Approval of project or program leaders is
required.
Avoid: Adjust program requirements or constraints to eliminate or reduce the risk. This adjustment
could be accommodated by a change in funding, schedule, or technical requirements.
Watch/Monitor: Monitor the environment for changes that affect the nature and/or the impact of the
risk.
Each of these options requires developing a plan that is implemented and monitored for effectiveness.
More information on handling options is discussed under best practices and lessons learned below.
From a systems engineering perspective, common methods of risk reduction or mitigation with
identified program risks include the following, listed in order of increasing seriousness of the risk [4]:
When determining the method for risk mitigation, the MITRE SE can help the customer assess the
performance, schedule, and cost impacts of one mitigation strategy over another. For something like
"parallel" development mitigation, MITRE SEs could help the government determine whether the cost
could more than double, while time might not be extended by much
(e.g., double the cost for parallel effort, but also added cost for additional program office and user
engagement). For conducting rapid prototyping or changing operational requirements,
MITRE SEs can use knowledge in creating prototypes and using prototyping and experimenting
(see SE Guide article on Special Considerations for Conditions of Uncertainty: Prototyping and
Experimentation and the Requirements Engineering topic) for projecting the cost and time to conduct a
prototype to help mitigate particular risks (e.g., requirements). Implementing more engineering reviews
and special oversight and testing may require changes to contractual agreements. MITRE systems
engineers can help the government assess these (schedule and cost) by helping determine the basis of
estimates for additional contractor efforts and providing a reality check for these estimates. MITRE's CASA
[Center for Acquisition and Systems Analysis] and the CCG [Center for Connected Government]
Investment Management practice department have experience and a knowledge base in many
development activities across a wide spectrum of methods and can help with realistic assessments of
mitigation alternatives.
For related information, refer to the other articles in this Risk Management topic area of the SE Guide.
Handling Options
o Assume/Accept. Collaborate with the operational users to create a collective understanding of risks and
their implications. Risks can be characterized as impacting traditional cost, schedule, and performance
parameters. Risks should also be characterized as impact to mission performance resulting from reduced
technical performance or capability. Develop an understanding of all these impacts. Bringing users into
the mission impact characterization is particularly important to selecting which "assume/accept" option
is ultimately chosen. Users will decide whether accepting the consequences of a risk is acceptable.
Provide the users with the vulnerabilities affecting a risk, countermeasures that can be performed, and
residual risk that may occur. Help the users understand the costs in terms of time and money.
o Avoid. Again, work with users to achieve a collective understanding of the implications of risks. Provide
users with projections of schedule adjustments needed to reduce risk associated with technology
maturity or additional development to improve performance. Identify capabilities that will be delayed
and any impacts resulting from dependencies on other efforts. This information better enables users to
interpret the operational implications of an "avoid" option.
o Control. Help control risks by performing analyses of various mitigation options. For example, one
option is to use a commercially available capability instead of a contractor developed one. In developing
options for controlling risk in your program, seek out potential solutions from similar risk situations of
other MITRE customers, industry, and academia. When considering a solution from another
organization, take special care in assessing any architectural changes needed and their implications.
o Transfer. Reassigning accountability, responsibility, or authority for a risk area to another organization
can be a double-edged sword. It may make sense when the risk involves a narrow specialized area of
expertise not normally found in program offices. But, transferring a risk to another organization can
result in dependencies and loss of control that may have their own complications. Position yourself and
your customer to consider a transfer option by acquiring and maintaining awareness of organizations
within your customer space that focus on specialized needs and their solutions. Acquire this awareness
as early in the program acquisition cycle as possible, when transfer options are more easily
implemented.
o Watch/Monitor. Once a risk has been identified and a plan put in place to manage it, there can be a
tendency to adopt a "heads down" attitude, particularly if the execution of the mitigation appears to be
operating on "cruise control." Resist that inclination. Periodically revisit the basic assumptions and
premises of the risk.
Scan the environment to see whether the situation has changed in a way that affects the nature or
impact of the risk. The risk may have changed sufficiently so that the current mitigation is ineffective and
needs to be scrapped in favor of a different one. On the other hand, the risk may have diminished in a
way that allows resources devoted to it to be redirected.
o Understand the users and their needs. The users/operational decision makers will be the decision
authority for accepting and avoiding risks. Maintain a close relationship with the user community
throughout the system engineering life cycle. Realize that mission accomplishment is paramount to the
user community and acceptance of residual risk should be firmly rooted in a mission decision.
o Seek out the experts and use them. Seek out the experts within and outside MITRE. MITRE's technical
centers exist to provide support in their specialty areas. They understand what's feasible, what's worked
and been implemented, what's easy, and what's hard. They have the knowledge and experience
essential to risk assessment in their area of expertise. Know our internal centers of excellence, cultivate
relationships with them, and know when and how to use them.
o Recognize risks that recur. Identify and maintain awareness of the risks that are "always there" —
interfaces, dependencies, changes in needs, environment and requirements, information security, and
gaps or holes in contractor and program office skill sets. Help create an acceptance by the government
that these risks will occur and recur and that plans for mitigation are needed up front. Recommend
various mitigation approaches, including adoption of an evolution strategy, prototyping,
experimentation, engagement with the broader stakeholder community, and the like.
o Encourage risk taking. Given all that has been said in this article and its companions, this may appear
to be an odd piece of advice. The point is that there are consequences of not taking risks, some of which
may be negative. Help the customer and users understand that reality and the potential consequences
of being overly timid and not taking certain risks in your program. An example of a negative
consequence for not taking a risk when delivering a full capability is that an adversary might realize a
gain against our operational users. Risks are not defeats, but simply bumps in the road that need to be
anticipated and dealt with.
o Recognize opportunities. Help the government understand and see opportunities that may arise from
a risk. When considering alternatives for managing a particular risk, be sure to assess whether they
provide an opportunistic advantage by improving performance, capacity, flexibility, or desirable
attributes in other areas not directly associated with the risk.
o Encourage deliberate consideration of mitigation options. This piece of advice is good anytime, but
particularly when supporting a fast-paced, quick reaction government program that is juggling many
competing priorities. Carefully analyze mitigation options and encourage thorough discussion by the
program team. This reflects the wisdom of "go slow to go fast."
o Not all risks require mitigation plans. Risk events assessed as medium or high criticality should go into
risk mitigation planning and implementation. On the other hand, consider whether some low criticality
risks might just be tracked and monitored on a watch list. Husband your risk-related resources.
o Determine the appropriate risk manager. The risk manager is responsible for identifying and
implementing the risk mitigation plan. He or she must have the knowledge, authority, and resources to
implement the plan. Risk mitigation activities will not be effective without an engaged risk manager. It
may be necessary to engage higher levels in the customer organization to ensure the need for the risk
manager is addressed. This can be difficult and usually involves engaging more senior levels of the
MITRE team as well.
o Develop a high-level mitigation strategy. This is an overall approach to reduce the
risk impact severity and/or probability of occurrence. It could affect a number of risks and include, for
example, increasing staffing or reducing scope.
o Identify actions and steps needed to implement the mitigation strategy. Ask these key questions:
o Make sure you have the right exit criteria for each. For example, appropriate decisions, agreements,
and actions resulting from a meeting would be required for exit, not merely the fact that the meeting
was held.
o Look for evaluation, proof, and validation of met criteria. Consider, for example, metrics or test events.
o Include only and all stakeholders relevant to the step, action, or decisions.
o Forward Planning: Determine the time needed to complete each action step and when the expected
completion date should be.
o Evaluate key decision points and determine when a move to a contingency plan should be taken.
Develop a contingency plan ("fall back, plan B") for any high risk.
o Are cues and triggers identified to activate contingency plans and risk reviews?
o Include decision point dates to move to fallback plans. The date to move must allow time to execute
the fallback plan.
o Evaluate the status of each action. Determine when each action is expected to be completed
successfully.
o Integrate plans into the IMS (Integrated Master Schedule) and program management baselines. Risk plans
are integral to the program, not something apart from it.
Monitoring Risk
o Include risk monitoring as part of the program review and manage continuously. Monitoring risks
should be a standard part of program reviews. At the same time, risks should be managed continuously
rather than just before a program review.
o Review and track risk mitigation actions for progress. Determine when each action is expected to be
completed successfully.
o Revisit risk analysis as plans and actions are successfully completed. Are the risks burning down?
Evaluate impact to program critical path.
o Routinely reassess the program's risk exposure. Evaluate the current environment for new risks or
modification to existing risks.
RESULT:
Thus, identifying and analyzing all the possible risks and their risk mitigation plans for the system to
be automated has been completed successfully and the output is verified.
Aim:
To diagnose any risk using an Ishikawa diagram (also called a fishbone diagram or cause-and-effect
diagram).
Description:
A cause and effect diagram, often called a “fishbone” diagram, can help in brainstorming to identify
possible causes of a problem and in sorting ideas into useful categories. A fishbone diagram is a visual
way to look at cause and effect. It is a more structured approach than some other tools available for
brainstorming causes of a problem (e.g., the Five Whys tool). The problem or effect is displayed at the
head or mouth of the fish. Possible contributing causes are listed on the smaller “bones” under various
cause categories. A fishbone diagram can be helpful in identifying possible causes for a problem that
might not otherwise be considered by directing the team to look at the categories and think of
alternative causes. Include team members who have personal knowledge of the processes and systems
involved in the problem or event to be investigated.
Directions: The team using the fishbone diagram tool should carry out the steps listed below.
● Agree on the problem statement (also referred to as the effect). This is written at the mouth of the
“fish.” Be as clear and specific as you can about the problem. Beware of defining the problem in terms of
a solution (e.g., we need more of something).
● Agree on the major categories of causes of the problem (written as branches from the main arrow).
Major categories often include: equipment or supply factors, environmental factors,
rules/policy/procedure factors, and people/staff factors.
● Brainstorm all the possible causes of the problem. Ask “Why does this happen?” As each idea is given,
the facilitator writes the causal factor as a branch from the appropriate category (places it on the
fishbone diagram). Causes can be written in several places if they relate to several categories.
● Again ask "Why does this happen?" about each cause. Write sub-causes branching off the cause
branches.
● Continue to ask "Why?" and generate deeper levels of causes, and continue organizing them under
related causes or categories. This will help you to identify and then address root causes to prevent
future problems.
Tips:
● Use the fishbone diagram tool to keep the team focused on the causes of the problem, rather than the
symptoms.
● Consider drawing your fish on a flip chart or large dry erase board.
● Make sure to leave enough space between the major categories on the diagram so that you can add
minor detailed causes later.
● When you are brainstorming causes, consider having team members write each cause on sticky notes,
going around the group asking each person for one cause. Continue going through the rounds, getting
more causes, until all ideas are exhausted.
● Encourage each person to participate in the brainstorming activity and to voice their own opinions.
● Note that the “five-whys” technique is often used in conjunction with the fishbone diagram – keep
asking why until you get to the root cause.
● To help identify the root causes from all the ideas generated, consider a multi-voting technique such
as having each team member identify the top three root causes. Ask each team member to place three
tally marks or colored sticky dots on the fishbone next to what they believe are the root causes that
could potentially be addressed.
Examples:
Here is an example of the start of a fishbone diagram that shows sample categories to consider, along
with some sample causes.
Here is an example of a completed fishbone diagram, showing information entered for each of the four
categories agreed upon by this team. Note, as each category is explored, teams may not always identify
problems in each of the categories.
● Background: the plan of care stipulated that the resident was to be transferred with two staff
members, or with one staff member using a sit-to-stand lift.
● Information from interviews: the resident was anxious and needing to use the bathroom urgently.
The aide was helping the resident transfer from her wheelchair to the toilet, without using a lift, and the
resident fell, sustaining an injury. The aide stated she did not use the lift because the battery was being
recharged, and there was no extra battery available. The aide stated she understood that the resident
could be transferred with assist of one.
With this information, the team proceeded to use the fishbone diagram to better
understand the causes of the event.
The value of using the fishbone diagram is to dig deeper, to go beyond the initial incident report, to
better understand what in the organization’s systems and processes are causing the problem, so they
can be addressed.
● There is no process in place to ensure that every lift in the building always has a working battery. (One
battery for the lift on this unit is no longer working, and the other battery was being recharged.)
● There is no process in place to ensure timely communication of new care information to the aides.
(New transfer information had not yet been conveyed to the aide. The aide’s “care card” still indicated
transfer with assist of one for this resident.) The root causes of the event are the underlying process and
system problems that allowed the contributing factors to culminate in a harmful event. As this example
illustrates, there can be more than one root cause. Once you have identified root causes and
contributing factors, you will then need to address each root cause and contributing factor as
appropriate. For additional guidance on following up on your fishbone diagram findings, see the
Guidance for Performing RCA with Performance Improvement Projects tool.
Result:
Thus, diagnosing a risk using an Ishikawa diagram (also called a fishbone diagram or cause-and-effect
diagram) has been completed successfully and the output is verified.
Aim:
To define a complete project plan for the system to be automated using the Microsoft Project tool.
Description:
When working with MS Project, you specify either a start date or a finish date. Once you enter one of
the two, along with the project tasks, constraints, and dependencies, MS Project will calculate the
other date. It is always good practice to use a start date even if you know the deadline for the project.
Launch MS Project
Windows 7 − Click on Start menu, point to All Programs, click Microsoft Office, and then click Project
2013.
Windows 10 − Click on Start menu → All apps → Microsoft Office → Project 2013.
MS Project 2013 will display a list of options. In the list of available templates, click Blank Project.
Project sets the plan's start date to the current date; a thin green vertical line in the chart portion of
the Gantt Chart view indicates this current date.
Let us change the project start date and add some more information.
Click Project tab → Properties group → Project Information. A dialog box appears. In the start date box,
type 11/5/15, or click the down arrow to display the calendar and select November 5, 2015 (or any date of
your choice).
Click the arrow on the Calendar dropdown box. A list appears containing the three base calendars.
24 Hours − A schedule with no non-working time.
Night Shift − Covers 11 PM to 8 AM, night shifts covering all nights from Monday to Friday, with one-hour
breaks.
Standard − Regular working hours, Monday to Friday between 8 AM and 5 PM, with one-hour breaks.
Select a Standard Calendar as your project Calendar. Click “Cancel” or “OK” to close the dialog box. Now
let us add exceptions.
Exceptions are used to modify a Project calendar to have a non-standard workday or a non-working day.
You can also allot unique working hours for a particular resource as well.
Here is an example to create a non-working day, which could be because of a holiday or office
celebrations or events other than the standard office work effort.
Click Project tab → Properties group → Click Change Working Time. The Change Working Time dialog box
appears. Under the Exceptions tab, click the Name field and enter the event as
“Office Anniversary”. In the Start field enter 11/22/15, and then enter the same date in the Finish field.
This date is now scheduled as a non-working day for the project. You can also verify the changed color
indicated in the calendar within the dialog box as below. Click Ok to close.
Step 4: Setting up Resource Calendar
Just like you can change a Standard Base Calendar, you can change the work and non-working time for
each resource. You can modify the resource calendar to accommodate flex-time, vacation time, training
time, etc.
Also remember, Resource Calendar can only be applied to work resources and not to material and cost
resources.
By default when we create the resources in a plan, the resource calendar matches the Standard base
calendar. And any changes you make to the Project Calendar, gets reflected automatically in resource
calendars, except when you create an exception in the resource calendar. In that case even if you
update the project calendar, the exception in resource calendar is not affected.
Click Project tab → Properties group → Click Change Working Time Change Working Time dialog box
appears.
Click the down arrow for the “For Calendar” drop-down box.
Select the resource for whom you want to create an exception. In the example below, John has been chosen.
Under Exceptions Tab click on the Name Field, enter event as “Personal holiday”. In the Start field enter
the date (example 9/15/2015), and then enter the same date in the Finish field.
Step 5: Change Working times for Each Resource
Click Project tab → Properties group → Click Change Working Time. The Change Working Time dialog
box appears.
Click the down arrow for the “For Calendar” dropdown box.
Select the resource for whom you want to change work schedule.
In the following screen you can see we have chosen John. Click “Work Weeks” tab.
Under “Selected Day(s)” choose any day you want to change the work schedule.
Click Set day(s) to these specific working times. Change the time.
Click Project tab → Properties group → Click Change Working Time. The Change Working Time dialog
box appears.
Click the down arrow for the “For Calendar” dropdown box.
Select the resource for whom you want to change the work schedule. We have chosen John again.
Click “Work Weeks” tab.
Under “Selected Day(s)” choose any day you want to change the work schedule.
Click any day (we have chosen Friday) and use the radio button "Set days to nonworking time".
Click OK to close the dialog box. You will now see all Fridays are greyed out in the calendar.
With Microsoft Windows Operating system, right clicking a file and selecting “Properties” brings up the
file properties dialog box that contains version, security and other file details. You can record some top
level information for your .mpp project file as well. This can be done as follows −
Windows 7 − Click on Start menu, point to All Programs, click Microsoft Office, and then click Project
2013.
Windows 8 − On the Start screen, tap or click Project 2013.
Windows 10 − Click on Start menu → All apps → Microsoft Office → Project 2013.
Click File Tab. Under Info Tab go to Project Information. Click arrow near Project Information to click
Advanced Properties. A dialog box opens, you can type in the changes as required. Click OK and don’t
forget to save by clicking on Save.
Before we start, let us assume you already have a Work Breakdown Structure (WBS). In context of WBS,
“Work” refers to “Deliverables” and not effort.
WBS identifies the deliverable at the lowest level as work package. This work package is decomposed
into smaller tasks/activities, which is the effort necessary to complete the work package. So a task is
action-oriented, and the work package is the deliverable or a result of one or more tasks being
performed.
There is a significant amount of confusion between what constitutes an activity and what constitutes a
task within the project management community. But for MS Project, a task is the effort and action
required to produce a particular project deliverable. MS Project does not use the term “activity”.
Enter Task
This is simple. In Gantt Chart View, just click a cell directly below the Task Name column. Enter the task
name. In the following screen, we have entered 5 different tasks.
Enter Duration
The duration of a task is the estimated amount of time it will take to complete that task. As a project
manager you can estimate a task duration using expert judgment, historical information, analogous
estimates or parametric estimates.
You can enter task duration in terms of different dimensional units of time, namely minutes, hours, days,
weeks, and months. You can use abbreviations for simplicity and ease; the commonly used MS Project
abbreviations are min (minute), hr or h (hour), day or d (day), wk or w (week), and mon or mo (month).
Remember, Project default values depend on your work hours. So 1 day is not equivalent to 24 hours
but has 8 hours of work for the day. Of course, you can change these defaults anytime you want.
Click Project tab → Properties Group → Click Change Working Time → Click Options. You can apply this
to all projects or a specific project that you are working on currently.
One of the neat tricks MS Project possesses is that it considers the duration of a task in working-day
terms. So if you have a non-working day in between, it accommodates this and ensures that a task that
takes 16 hours to complete ends on the third day. In other words, if you have a task that needs 16 hours
to complete starting on Monday 8:00 AM (if this is the time your work day starts, and 8 hours being the
total work hours in a day), and Tuesday being a holiday, the task will logically end on the evening of
Wednesday.
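A minimal sketch of this working-day logic (assuming an 8-hour work day and a hand-written holiday list;
this is an illustration only, not how MS Project is implemented internally) is shown below.

from datetime import date, timedelta

HOURS_PER_DAY = 8                      # assumed length of a standard working day
HOLIDAYS = {date(2015, 11, 3)}         # hypothetical non-working Tuesday

def is_working_day(d):
    # Monday to Friday, excluding listed holidays
    return d.weekday() < 5 and d not in HOLIDAYS

def finish_date(start, work_hours):
    """Walk forward one calendar day at a time, consuming hours only on working days."""
    d, remaining = start, work_hours
    while True:
        if is_working_day(d):
            remaining -= HOURS_PER_DAY
            if remaining <= 0:
                return d
        d += timedelta(days=1)

# A 16-hour task starting Monday 11/2/15 with Tuesday as a holiday finishes on Wednesday,
# matching the behaviour described above.
print(finish_date(date(2015, 11, 2), 16))   # -> 2015-11-04

With the 11/3 holiday removed from the list, the same 16-hour task would finish on Tuesday 11/3 instead.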
Tip − With manually scheduled tasks, if you are not sure about a task duration, you can just enter text
such as “Check with Manager/Engineer” to come back to this later.
This is simple in Gantt Chart View, click the cell below Duration column heading. Enter the duration.
(Task 1 in the following screenshot)
You can also enter Start and Finish date and MS Project will calculate the duration on its own. (Task 2 in
the following screenshot)
You can enter text as well when you don’t have a duration metric currently. (Task 3 and Task 4 in the
following screenshot)
Note − In the above screenshot, Task 6 is scheduled to start on Sunday, which is a nonworking day and
ends on Wednesday. So essentially, one would believe that with these 3 days Monday, Tuesday,
Wednesday, the duration calculated would be 3 days. But MS Project 2013 calculates it as 4 days. So one
needs to be careful when choosing the start date of the task. Because for any successive operation, MS
Project 2013 considers that Task 6 will take 4 days. The next time, you change the start date, the Finish
date changes to reflect this 4-day duration.
Elapsed Duration
Elapsed Duration is the time that elapses while some event is occurring which does not require any
resources. Elapsed duration for a task can be used in instances where a task will go on round-the- clock
without any stoppage. A normal workday has 8 hours, and an elapsed day duration will have 24 hours.
The task also continues over non-working (holidays and vacations) and working days.
You can enter elapsed duration by preceding any duration abbreviation with an "e". So 1ew is seven
24-hour days.
For example, when you are ‘Waiting for the paint to dry’. And it takes 4 days for this to happen. It does
not need a resource or a work effort, and all you are doing is waiting for it to dry. You can use 4ed as the
time duration, which signifies 4 elapsed days, the paint can dry regardless of whether it is a weekend or
if it falls on a holiday. Here in this example, the drying occurs over 24 hours over the weekend.
Create Milestones
In Project Management, Milestones are specific points in a project timeline. They are used as major
progress points to manage project success and stakeholder expectations. They are primarily used for
review, inputs and budgets.
Mathematically, a milestone is a task of zero duration. And they can be put where there is a logical
conclusion of a phase of work, or at deadlines imposed by the project plan.
Method 1
To insert a milestone, click Task tab → Insert group → Click Milestone.
MS Project names the new task as <New Milestone> with a zero-day duration.
Click on <New Milestone> to change its name.
You can see the milestone appear with a rhombus symbol in the Gantt Chart View on the right.
Method 2
Click on any particular task or type in a new task under the Task Name heading. Under the Duration
heading, type in "0 days".
In Method 2, a task was converted to a Milestone of Zero duration. But one can also convert a task of
non-zero duration into a Milestone. This is rarely used and causes confusion.
In Gantt Chart View → Format Tab → Show/Hide → click to check Project Summary Task on.
There can be a huge number of tasks in a project schedule, it is therefore a good idea to have a bunch of
related tasks rolled up into a Summary Task to help you organize the plan in a better way. It helps you
organize your plan into phases.
In MS Project 2013, you can have any number of sub-tasks under any higher level task. These higher
level tasks are called Summary Task. At an even higher level, they are called Phases. The highest level of
a plan’s outline structure is called the Project Summary Task, which encompasses the entire project
schedule.
Remember because summary task is not a separate task entity but a phase of the project with several
sub-tasks in it, the duration of the summary task is from the start of the first sub-task to the finish of the
last sub-task. This will be automatically calculated by MS Project.
Of course, you can enter a manual duration of the summary task as well which could be different from
the automatically calculated duration. MS Project will keep track of both but this can cause significant
confusion.
In most cases, you should ensure that there is no manually entered duration for any task you will be
using as a Summary Task.
Let us use the following screenshot as an example. If you would like to group Task 4 and Task 5 into a
Summary Task 1. You can do it in two ways.
Method 1
Method 2
Link Tasks
Once you have a list of tasks ready to accomplish your project objectives, you need to link them with
their task relationships called dependencies. For example, Task 2 can start once Task 1 has finished.
These dependencies are called Links. A Guide to the Project Management Body of Knowledge (PMBOK
Guide) does not define the term dependency, but refers to it as a logical relationship, which in turn is
defined as a dependency between two activities, or between an activity and a milestone.
In MS Project, the first task is called a predecessor because it precedes tasks that depend on it. The
following task is called the successor because it succeeds, or follows tasks on which it is dependent. Any
task can be a predecessor for one or more successor tasks. Likewise, any task can be a successor to one
or more predecessor tasks.
There are only four types of task dependencies, here we present them with examples.
Finish to Start (FS) − Finish the first floor before starting to build the second floor. Most used.
Finish to Finish (FF) − Cooking all dishes for dinner to finish on time.
Start to Start (SS) − When doing a survey, we would seek survey responses but would also start tabulating
the responses. One does not have to finish collecting survey responses before starting the tabulation.
Start to Finish (SF) − Exam preparation will end when the exam begins. Least used.
A simplified sketch of the most common dependency type is shown below.
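As an illustration only (a simplified model of a finish-to-start link, not MS Project's scheduling
engine), the rule "the successor starts on the next working day after the predecessor finishes" can be
expressed as:

from datetime import date, timedelta

def next_working_day(d):
    """Return the first Monday-to-Friday date after d (holidays ignored for brevity)."""
    d += timedelta(days=1)
    while d.weekday() >= 5:      # skip Saturday and Sunday
        d += timedelta(days=1)
    return d

# Hypothetical tasks: Task 1 finishes on a Friday; Task 2 is its finish-to-start (FS) successor.
task1_finish = date(2015, 9, 4)                  # a Friday
task2_start = next_working_day(task1_finish)     # FS link: start on the next working day
print(task2_start)                               # -> 2015-09-07 (the following Monday)

The other dependency types (SS, FF, SF) would adjust the start or finish dates analogously; this sketch
covers only the most common FS case.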
Gantt Chart − In Gantt Chart and Network Diagram views, task relationships appear as the links
connecting tasks.
Tables − In Tables, task ID numbers of predecessor task appear in the predecessor fields of successor
tasks.
Method 1
Select the two tasks you want to link. In the following screenshot, taken as an example, we have selected
Task 1 and Task 2. Click Task tab → Schedule group → Link the Selected Tasks.
Note − Task 2 will have a Start date of the next working day after the Finish date of Task 1.
Method 2
In the Table, click the empty cell below Task Name column.
A drop down box appears with all Tasks defined in the project.
Method 3
In this method, you will select a group of task, and link them all with Finish-to-Start relationship.
Select multiple tasks with the help of the mouse → Task tab → Schedule group → Link the Selected
Tasks.
All tasks get linked. To select non-adjacent tasks, hold down Ctrl key and select each task separately.
Respect Links
If you are in Manually Scheduled mode, any change in the duration of the predecessor task will not be
reflected in the Start date of Task 4. For example, Task 4 starts on 9/3/15, which is the day after the Finish date of Task
3.
Now when we change the Duration of Task 3 from 5 to 7 days, the start date is not automatically
updated for Task 4 in Manual Scheduling.
You can force MS Project to respect the link (dependency) by doing the following − Select Task 4, then
click Task tab → Schedule group → Respect Links.
MS Project by default sets new tasks to be manually scheduled. Scheduling is controlled in two ways.
Manual Scheduling − This is done to quickly capture some details without actually scheduling the tasks.
You can leave out details for some of the tasks with respect to duration, start and finish dates, if you
don’t know them yet.
Automatic Scheduling − This uses the scheduling engine in MS Project. It calculates values such as task
durations, start dates, and finish dates automatically. It takes into account all constraints, links, and
calendars.
For example, at Lucerne Publishing, the new book launch plan has been reviewed by the resources who
will carry out the work and by other project stakeholders. Although you expect the plan to change
somewhat as you learn more about the book launch, you now have enough confidence in the overall
plan to switch from manual to automatic task scheduling.
Method 1
If you want to change the mode for a particular task, say Task 5 in the following example. Click on Task
Mode cell in the same row. Then, click the down arrow to open a dropdown box, you can select Auto
Scheduled.
Method 2
To switch completely to Auto Schedule mode − Toggle the scheduling mode of the plan by clicking the
New Tasks status bar (at the bottom-left) and then selecting Auto scheduling mode.
You can also change the default scheduling mode that Project applies to all new plans.
Go to File tab and click Options. Then click Schedule tab and under scheduling options for this project
select “All New Projects” from the dropdown box. Under new tasks created, select “Auto Scheduled”
from the dropdown box.
Result:
Thus, defining a complete project plan for the system to be automated using the Microsoft Project tool
has been completed successfully and the output is verified.
Experiment no 07
Vision document
Date
Aim:
To define the features, vision, business objectives, business rules, and stakeholders in the vision
document.
Vision document
A vision document defines the high-level scope and purpose of a program, product, or project. A clear
statement of the problem, proposed solution, and the high-level features of a product helps establish
expectations and reduce risks. This topic provides an outline of potential content for a vision document.
See Developing a vision for an explanation of how the product owner or business analyst works with
stakeholders to develop a vision document. That topic, which is part of the IBM® Engineering Lifecycle
Management (ELM) scenario guidance, describes the vision-development process. This topic outlines
typical content for the document. You can copy this outline, paste it into a new document, and use it as
the basis for your vision document. Use those portions of this outline that are relevant for your project.
When a team uses the Requirements Management (RM) capability in the ELM, the vision document can
be expressed in one or more rich-text documents or modules. You can embed requirements and related
artifacts in rich-text documents or use the numbered hierarchical structure of a module to organize
content. Team members can set attributes, such as priority and status, on each artifact and create trace
links between related documents, modules, and individual artifacts.
1: Introduction
This introduction provides an overview of the entire vision document. It includes the purpose, scope,
definitions, acronyms, abbreviations, references, and an overview of the full document.
1.2 Scope: Briefly describe the scope of this vision document, including which programs, projects,
applications, and business processes the document is associated with. Include anything else that this
document affects or influences.
1.3 Definitions, acronyms and abbreviations: Define all terms, acronyms, and abbreviations that
are required to interpret the vision correctly. This information might be provided by reference to the
project glossary, which can be developed online in the RM repository.
1.4 References: List all documents that the vision document refers to. Identify each document by
title, report number (if applicable), date, and publishing organization. Specify the sources from which
readers can obtain the references; the sources are ideally available in RM or in other online repositories.
This information might be provided by reference to an appendix or to another document.
1.5 Overview: Describe the vision-document contents and explain how the document is organized.
2: Positioning
2.1 Business opportunity: Briefly describe the business opportunity that is addressed by this
project.
2.2 Problem statement: Summarize the problem that this project solves. Use the following
statements as a model, providing project details to replace the parenthetical elements:
The problem of (describe the problem) affects (the stakeholders affected by the problem). The impact of
the problem is (what is the impact of the problem). A successful solution would include (list some key
benefits of a successful solution).
2.3 Product position statement: Provide an overall statement that summarizes, at the highest level,
the unique position the product intends to take in the marketplace. Use the following statements as a
model, providing project details to replace the parenthetical elements:
For the (target customer) who (statement of the need or opportunity), the (product name) is a (product
category) that (statement of key benefit, that is, the compelling reason to buy). Unlike (primary
competitive alternative), our product (statement of primary differentiation).
A product position statement communicates the intent of the application and the importance of the
project to all concerned stakeholders.
3: Stakeholder and user descriptions
To provide products and services that meet stakeholders' and users' needs, you must identify and
involve all stakeholders as part of the requirements-definition process. You must also identify the
system users and ensure that the stakeholder community represents them adequately.
This section provides a profile of the stakeholders and users who are involved in the project. This section
also identifies the key problems that stakeholders and users consider that the proposed solution must
address. This section does not describe specific requests or requirements; a separate stakeholder
requests artifact captures these items. The key-problem description provides the background and
justification for requirements.
3.1 Market demographics: Summarize the key market demographics that motivate your product
decisions. Describe and position target market segments. Estimate the market size and growth by using
the number of potential users. Alternatively, estimate the amount of money that your customers spend
trying to meet the needs that your product or enhancement would fulfill. Review major industry trends
and technologies. Answer these strategic questions:
3.2 Stakeholder summary: List all the identified stakeholders. For each stakeholder type, provide
this information:
• Represents: Briefly describe which individuals, teams, or organizations this stakeholder type
represents.
• Role: Briefly describe the role this stakeholder type plays in the development effort.
3.3 User summary: List all the identified user types. For each user type, provide this information:
• Name: Name the user type
• Description: Briefly describe the relationship of this type of user to the system under
development.
3.4 User environment: Detail the working environment of the target user. Here are some
suggestions:
• How many people are involved in completing the task? Is this changing?
• How long is a task cycle? How much time do users spend in each activity? Is this changing?
• What unique environmental constraints affect the project? For example, do users require
mobile devices, work outdoors, or work during flights?
• Which system platforms are in use today? Are there future platforms planned?
• What other applications are in use? Does your application need to integrate with them?
In this section, you might include extracts from the business model to outline the task and workers who
are involved.
3.5 Stakeholder profiles: Describe each stakeholder in the project by completing the following table
for each stakeholder. Remember: Stakeholder types can be users, strategy departments, legal or
compliance departments, technical developers, operations teams, and others. A thorough profile covers
the following topics for each stakeholder type:
• Representative: State who represents the stakeholder to the project (This information is
optional if it is documented elsewhere.) Enter the representatives' names.
• Type: Qualify the expertise of the stakeholder, such as "guru", "business expert", or "casual
user." This designation can suggest technical background and degree of sophistication.
• Responsibilities: List the key responsibilities of the stakeholder on the system under
development; list their interests as a stakeholder.
• Success criteria: State how the stakeholder defines success. How is the stakeholder rewarded?
• Involvement: Describe how the stakeholder is involved in the project. Where possible, relate
the involvement to the process roles; for example, a stakeholder might be a requirements reviewer.
• Deliverables: Identify additional deliverables that the stakeholder requires. These items might
be project deliverables or output from the system under development.
• Comments or issues: State problems that interfere with success and any other relevant
information.
3.6 User profiles: Describe each user of the system here by completing the following table for each
user type. Remember user types can be experts and novices; for example, an expert might need a
sophisticated, flexible tool with cross-platform support, while a novice might need a tool that is easy to
use. A thorough profile covers these topics for each type of user:
• Representative: State who represents the user to the project. (This information is optional if it is
documented elsewhere.) This representative often refers to the stakeholder who represents a set of
users; for example, Stakeholder: Stakeholder1.
• Type: Qualify the expertise of the user, such as “guru” or “casual user”. This designation can
suggest technical background and degree of sophistication.
• Responsibilities: List the key user responsibilities with respect to the system; for example, state
who captures customer details, produces reports, and coordinates work, and so on.
• Success criteria: State how the user defines success. How is the user rewarded?
• Involvement: Describe how the user is involved in the project. Where possible, relate the
involvement to process roles; for example, a stakeholder might be a requirements reviewer.
• Deliverables: Identify the deliverables that the user produces and for whom.
• Comments or issues: State problems that interfere with success and any other relevant
information. Describe trends that make the user's job easier or harder.
3.7 Key stakeholder or user needs: List the key problems with existing solutions as the stakeholder
perceives them. Clarify these issues for each problem:
You must understand the relative importance that the stakeholder places on solving each problem.
Ranking and cumulative voting techniques help indicate the problems that must be solved versus issues
that stakeholders would like to be addressed. Use this table to capture the stakeholder needs.
3.8 Alternatives and competition: Identify alternatives that the stakeholder perceives as available.
These alternatives can include buying a competitor's product, building a homegrown solution, or
maintaining the status quo. List any known and available competitive choices. Include the major
strengths and weaknesses of each competitor as the stakeholder perceives them.
4: Product overview
This section provides a high-level view of the product capabilities, interfaces to other applications, and
systems configurations. This section typically consists of three subsections:
• Product perspective
• Product functions
• Assumptions and dependencies
4.1 Product perspective: Put the product in perspective with regards to other related products and
the user's environment. If the product is independent and completely self-contained, state it here. If the
product is a component of a larger system, relate how these systems interact and identify the relevant
interfaces between the systems. One way to display the major components of the larger system,
interconnections, and external interfaces is to use a business process or use case diagram.
4.2 Summary of capabilities: Summarize the major benefits and features that the product will
provide. For example, a customer support system might use this part to address problem
documentation, routing, and status reporting without elaborating on detail that these functions require.
Organize the functions so that the list is understandable to the customer or to anyone else who reads
the document for the first time.
4.3 Assumptions and dependencies: List each factor that affects the features that the vision
document includes. List assumptions that, if changed, will alter the vision document. For example, an
assumption might state that a specific operating system will be available for the designated hardware
for the software product. If the operating system is not available, the vision document will require
change.
4.4 Cost and pricing: Record relevant cost and pricing impacts and constraints. For example,
distribution costs (the number of CDs and CD mastering) or other cost-of-goods-sold constraints
(manuals and packaging) might be material or irrelevant to project success, depending on the nature of
the application.
4.5 Licensing and installation: Licensing and installation issues can also directly affect the
development effort. For example, the need to support serializing, password security, or network
licensing will create additional system requirements that must be considered in the development effort.
Installation requirements might also affect coding, or create the need for separate installation software.
5: Product features
List and briefly describe the product features. Features are the high-level capabilities of the system that
are required to deliver benefits to the users. Each feature is a requested service that typically requires a
series of inputs to achieve a satisfactory result. For example, a feature of
a problem-tracking system might be the ability to provide trending reports. As the use case model takes
shape, update the description to refer to the use cases.
Because the vision document is reviewed by a wide variety of involved personnel, keep the level of
detail general enough for everyone to understand. However, offer sufficient detail to provide the team
with the information it needs to create a use case model or other design documents.
To manage application complexity, for a new system or an incremental change, list capabilities at such a
high level that you include approximately 25-99 features. These features provide the basis for product
definition, scope management, and project management. Each feature will be expanded into greater
detail in the use case model.
Throughout this section, make each feature relevant to users, operators, or other external systems.
Include a description of functions and usability issues that must be addressed. The following guidelines
apply:
• Avoid design. Keep feature descriptions at a general level. Focus on required capabilities and
why (not how) they should be implemented.
• Designate all features as requirements of a specific feature type for easy reference and tracking.
6: Constraints
Note any design constraints, external constraints, such as operational or regulatory requirements, or
other dependencies.
7: Quality ranges
Define the quality ranges for performance, robustness, fault tolerance, usability, and similar
characteristics that the feature set does not describe.
9.2 System requirements: Define the system requirements for the application. These can include
the supported host operating systems and network platforms, configurations, memory, peripheral
devices, and companion software.
9.3 Performance requirements: Detail performance requirements. Performance issues can include
such items as user-load factors, bandwidth or communication capacity, throughput, accuracy, reliability,
or response times under various load conditions.
10: Documentation requirements
This section describes the documentation that you must develop to support successful application
deployment.
10.1 Release notes, read me file: Release notes or an abbreviated read me file can include a "What's
new" section, a discussion of compatibility issues with earlier releases, and installation and upgrade
alerts. The document can also contain or link to fixes in the release and any known problems and
workarounds.
10.2 Online help: Many applications provide an online help system to assist the user. The nature of
these systems is unique to application development as they combine aspects of programming
(searchable information and web-like navigation) with aspects of technical writing (organization and
presentation). Many teams find that developing an online help system is a project within a project that
benefits from scope management and planning at the project outset.
10.3 Installation guides: A document that includes installation, configuration, and upgrade
instructions is part of offering a full solution.
10.4 Labeling and packaging: A consistent look and feel begins with product packaging and applies to
installation menus, splash screens, help systems, GUI dialog boxes, and so on. This section defines the
needs and types of labeling to be incorporated in the code. Examples include copyright and patent
notices, corporate logos, standardized icons, and other graphic elements.
Result:
Thus, to define the Features, Vision, Business objectives, Business rules and stakeholders in the vision
document is completed successfully and output is verified.
Aim:
To, define the functional and non-functional requirements of the system to be automated by using Use
cases and document in SRS document.
Description:
Functional requirements in software engineering help you to capture the intended behavior of the
system. This behavior may be expressed as functions, services, or tasks which the system is required to
perform.
A non-functional requirement defines the quality attribute of a software system. They represent a set of
standards used to judge the specific operation of a system. Example, how fast does the website load?
A non-functional requirement is essential to ensure the usability and effectiveness of the entire software
system. Failing to meet non-functional requirements can result in systems that fail to satisfy user needs.
Non-functional requirements allow you to impose constraints or restrictions on the design of the
system across the various agile backlogs. Example: the site should load in 3 seconds when the number of
simultaneous users is > 10,000. Describing non-functional requirements is just as critical as describing
functional requirements.
Examples of functional requirements:
• The software automatically validates customers against the ABC Contact Management System.
• The background color for all windows in the application will be blue and have a hexadecimal
RGB color value of 0x0000FF.
• Only Managerial level employees have the right to view revenue data.
Examples of non-functional requirements:
1. Users must change the initially assigned login password immediately after the first successful
login. Moreover, the initial password should never be reused.
2. Employees are never allowed to update their salary information. Any such attempt should be reported
to the security administrator.
3. Every unsuccessful attempt by a user to access an item of data shall be recorded on an audit
trail.
4. A website should be capable enough to handle 20 million users without affecting its performance.
5. The software should be portable, so moving from one OS to another should not create any
problem.
6. Privacy of information, the export of restricted technologies, intellectual property rights, etc.
should be audited.
Below are the main differences between functional and non-functional requirements in software
engineering:
• Helps you to check whether the application is providing all the functionalities that were
mentioned in the functional requirement of that application
• A functional requirement document helps you to define the functionality of a system or one of
its subsystems.
• Errors caught in the Functional requirement gathering stage are the cheapest to fix.
• Functional requirement can be expressed in Use Case form or user story as they exhibit
externally visible functional behavior.
• The nonfunctional requirements ensure the software system follows legal and compliance rules.
• They ensure the reliability, availability, and performance of the software system
• They ensure good user experience and ease of operating the software.
• They help in formulating security policy of the software system.
Is there an analysis available that compares SRS to a use case approach to capture requirements?
This is a great question, because there is confusion regarding these two widely used formats, and I don't
think I've ever seen such a comparison. There ought to be one; and my apologies if there already is one,
but it probably wouldn't make some of the same points that I feel are important. So here goes.
Many organizations use one or the other, but not both, to capture requirements. However, some
organizations do use both; and some authors say that use cases are derived from a software
requirements specification (SRS), whereas other authors say just the opposite.
An SRS provides a consistent structured format consisting of a number of separate sections for capturing
a wide range of information about your requirements. The SRS starts with an introduction, which
includes sections for purpose, scope, definitions, references, and an overview. Next the SRS has a
general description, which includes sections describing product perspective, product functions, user
characteristics, general constraints, and assumptions and dependencies. There may also be appendices
and an index.
The heart of the SRS is the set of specific requirements. For each functional requirement ("The system
shall…"), it is suggested to include an introduction plus sections describing inputs, processing, and
outputs. Then come sections for external interface requirements, such as user interfaces, hardware
interfaces, software interfaces, and communications interfaces.
Additional specific requirements sections address performance requirements, design constraints such as
standards compliance and hardware limitations, attributes (often called "nonfunctional requirements,"
quality factors, or "ilities"), and other requirements such as database, operations, and site adaptation.
The strength of an SRS is that the extensively-identified set of topics makes sure important areas are
addressed. However, many SRS users also cite this as a weakness, because it necessitates writing so
much.
An SRS has two other less-frequently recognized but nonetheless very important weaknesses. Capturing
each topic in a separate section, which I call "pigeon holing," is good in that it helps assure that all the
topics are addressed, but its separation of related information can dramatically reduce meaning.
Pigeon holing's weakness is especially pronounced with respect to nonfunctional requirements, which is
itself a misnomer based upon an inappropriate pigeon hole model. The term and the SRS's format imply
that these quality factors exist in the abstract, that is, that you just need a bunch of usability, reliability,
maintainability, and so forth. In fact, each is relevant only with regard to specific functionality; your
usability requirements, for instance, may vary considerably with each function to which they pertain.
The other big weakness of an SRS is not the SRS itself, but rather stems from the likelihood that one's
requirements definition is limited to the SRS. The SRS describes the requirements of a
product/system/software, which is actually a high-level design of a presumed way to accomplish the
REAL business requirements: what the organization/users/customers/stakeholders need in order to
receive value. Too often, the SRS capably describes an inappropriate product/system/software
because the REAL business requirements it must satisfy have not been defined adequately, mainly
because people think the SRS is the set of requirements.
Use cases
"Use case" is a (horribly clumsy, in my humble opinion) term coined in Sweden by, I presume, non-native
English speaker Ivar Jacobson, one of object orientation's "three amigos."
Use cases describe step-by-step how an actor interacts with the system, where an actor is usually the
user, but could be another system or a piece of hardware. Many system designers, including me, used
the same type of approach long before Jacobson popularized the term. Use
cases often are considered part of object-oriented development, but they really are not limited to any
particular methodology or even to information systems.
Each use case pertains to a specific business event, which is referred to as the "trigger." An example of a
business event might be placing an order for some item. The use case describes from the perspective of
the customer the set of steps that occur when the customer places the order, for instance:
The above set of steps is a scenario, one possible way of carrying out the use case. The most common
scenario often is called the success scenario, main path, or happy path. Additional scenarios then are
described to account for various alternatives and exceptions. For instance, scenario 2a might be for the
situation where the system cannot find the customer.
Use cases can be hierarchical. For instance, step 7 above could have several subsidiary use cases
describing each of the various ways the customer could pay for the order, such as credit card, check,
cash, purchase order, or charging one's account. Some of those in turn could have subsidiary or
alternative use cases, such as for when a credit card charge is authorized and when it is declined.
Similarly, one use case could refer to another use case. For example, step 2a where the customer was
not found could invoke a separate use case for registering a customer. The Unified Modeling Language
(UML) prescribes a use case diagram which shows the various use cases that may be involved and their
interrelationships, with stick figures representing actors and ovals for use cases.
Use cases are a widely used and highly regarded format for capturing requirements. Because of their
concrete step-by-step format, they are easy for a variety of stakeholders to understand in essentially the
same way. Thus, use cases help developers create systems that act in the manner users expect.
A byproduct benefit of use cases is that they have inherent linkage to test cases. At a minimum, a use
case is tested by carrying it out. Conventional wisdom, which is embodied in some use case definition
tools, says that a use case can be tested thoroughly by having one test case for each scenario path.
Unfortunately, most use cases overlook far too many of their possible paths and one usually needs many
more than one test case to exercise the various likely ways that a given scenario in fact may be invoked.
While use case steps are fundamentally simple sentences as described in the above example, many
organizations write steps that are much longer and more complex, which may improve understanding
but often actually interferes with it. Regardless, one of the common limitations of use cases is that they
don't include adequately detailed information to guide developers or testers as fully as needed.
Use cases describe behavior. The use case format is not well-suited to describe business rules or quality
factors, which, of course, are two important forms of requirements. Some use case writers don't address
business rules and quality factors at all, whereas others include them in separate pigeon hole sections
referenced to the applicable steps, which can interfere with meaning.
A number of prominent requirements authorities state quite categorically that use cases are the user's
requirements and the user's requirements are use cases. While the use case format can be used to describe
user or business requirements (referred to as a business domain use case), in practice use cases almost
always are system use cases, which are really a form of design describing the usage requirements of the
expected computer system.
To me, use cases describe the step-by-step usage behaviors for accomplishing a functional
requirement/specification, which logically means developing some (preferably lightweight) form of an
SRS before writing the use cases. However, some prominent authors suggest defining the use cases first
and then deriving the functional requirements from the use cases.
Especially because use cases are often used in a quasi-religious manner ("How dare you question what
I've come up with! I'm using use cases."), the existence of use cases virtually assures that they describe a
design which has been created in the absence of adequately defined REAL business requirements that
the designed system must satisfy to provide REAL value.
Both SRSs and use cases are valuable formats whose value is increased when their strengths and
weaknesses are recognized and addressed in context.
Result:
Thus, defining the functional and non-functional requirements of the system to be automated by using
Use cases and documenting them in the SRS document is completed successfully and output is verified.
Experiment no 09
1. Use case Vs. Features
2. Functional requirements Vs. Usecases
Date
Aim:
To define Use case Vs. Features and Functional requirements Vs. Use cases for the system to be
automated.
Description:
A traceability matrix (TM) is a document that correlates any two baselined documents which require a
many-to-many relationship comparison, checking the completeness of said relationship.
A Requirements Traceability Matrix (RTM) captures all requirements proposed by the client and their
traceability in a single document delivered at the end of the life cycle. In a software development
project, the RTM is a document used to verify that all of the requirements are connected to
test cases. The principal reason project managers use an RTM is to confirm that all requirements will be
accounted for in the testing phase.
RTM usually helps to evaluate the impact of project requirements. When requirements shift in the
middle of a project, a traceability matrix lets you see the impacted workflows, test cases, training
materials, software code, etc.
Whether you’re beginning to explore project management training or have a few years of experience
under your belt, it’s vital that you understand the requirements traceability matrix and how it can
benefit your project management processes and career.
Benefits of RTM
While we’ve touched on the utility of the requirements traceability matrix, let’s get into a few of the specifics.
A traceability matrix can help in the effort to provide proper and consistent documentation for your
team. From test cases to wireframes to user stories, a TM allows you to see if a requirement is fully
documented. A requirement traceability matrix can even call attention to missing requirements.
Tackling Defects
Let’s say you’re working on a rather extensive software project in the test manager role.
With a sizeable backlog of defects, you must tackle the most relevant issues first. A traceability matrix
can aid you in filtering defects linked to crucial requirements, along with defect severity, priority, and
more. Finally, an RTM helps establish complete test coverage.
Now that you know the benefits, let’s review the steps for creating a requirements traceability matrix.
If you’re new to RTM, this may seem like a lot of information to digest in one sitting.
Fortunately, there are plenty of videos, templates, examples, and tutorials to get you started on the
learning path. Here is the quick and straightforward version:
1. Establish your RTM goals by laying out your reason for creating the RTM.
2. Gather all accessible requirement documentation, such as the business requirement document
(BRD) and the functional requirement document (FRD). You’ll also need testing documentation, like
test cases, results, and bugs.
3. To make a simple RTM document, you can use an Excel spreadsheet. Create columns for
business requirements, functional requirements, test cases, test results, and bugs. Then, record
each business requirement from the BRD.
4. Take the FRD and record all corresponding functional requirements for every business
requirement.
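The same requirement-to-test-case mapping can be prototyped in code. Below is a minimal, illustrative Java sketch; the class name, requirement IDs, and test-case IDs are hypothetical and are not part of any prescribed RTM format. It simply stores which test cases cover which requirements and reports requirements with no linked test cases.
import java.util.*;

public class SimpleRtm {
    // maps a requirement ID to the IDs of the test cases that cover it
    private final Map<String, List<String>> coverage = new LinkedHashMap<>();

    public void addRequirement(String requirementId) {
        coverage.putIfAbsent(requirementId, new ArrayList<>());
    }

    public void linkTestCase(String requirementId, String testCaseId) {
        coverage.computeIfAbsent(requirementId, k -> new ArrayList<>()).add(testCaseId);
    }

    // requirements with no linked test cases indicate missing test coverage
    public List<String> uncoveredRequirements() {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : coverage.entrySet()) {
            if (e.getValue().isEmpty()) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        SimpleRtm rtm = new SimpleRtm();
        rtm.addRequirement("BR-01");          // hypothetical business requirement
        rtm.addRequirement("BR-02");
        rtm.linkTestCase("BR-01", "TC-101");  // hypothetical test case
        System.out.println("Uncovered: " + rtm.uncoveredRequirements()); // prints [BR-02]
    }
}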
Now that you have a basic idea of how to create an RTM, let’s look at types and tools.
Types of Traceability Matrix
There are three types of RTM: forward traceability, backward traceability, and bidirectional traceability.
1. Forward Traceability
Forward traceability is used to map the requirements to the test cases. Not only will this establish that
every requirement is being tested from top to bottom, but it will also assist in confirming that a project’s
trajectory is sound.
2. Backward Traceability
You can make a backward traceability matrix by mapping test cases to the requirements. Doing so aids
you in avoiding “scope creep” and going beyond the initial requirements unnecessarily.
3. Bidirectional Traceability
Bidirectional traceability essentially combines forward and backward traceability into one document.
This type is useful because it establishes that each requirement has related test cases.
Requirements Traceability Matrix (RTM) Tools
If an Excel spreadsheet isn’t your style, consider these requirements management tools:
Visure Requirements: This tool is provided by Visure Solutions, which is focused on business-critical
and safety-critical industries. Its Visure Requirements tool provides complete traceability.
Modern Requirements4DevOps: This tool is integrated with Microsoft's Azure DevOps, TFS,
and VSTS, and gives project managers traceability through every stage of the process.
ReQtest: Providing traceability from project start to finish, this cloud-based tool has a very
customizable requirement module that assists project managers in quickly evaluating and tracing
changes.
The actual customers’ requirements are listed in a document known as the Business Requirement
Specification (BRS). The BRS is a high-level requirement list derived after a brief
interaction with the client.
It is usually prepared by ‘Business Analysts’ or the project ‘Architect’ (depending upon the organization or
project structure). The ‘Software Requirement Specification’ (SRS) document is derived from the BRS.
It is a detailed document that contains all the meticulous details of all functional and non-
functional requirements. This SRS is the baseline for designing and developing software applications.
The Product Requirement Document (PRD) is a reference document for all the team members in a project
to tell them exactly what a product should do. It can be divided into sections like Purpose of the Product,
Product Features, and so on.
It is the document that helps in designing and implementing the software as per the business needs. It
maps the interactions between an actor and an event with a role that needs to be performed to achieve
a goal. It is a detailed step-by-step description of how a task needs to be performed.
For Example,
Actor: Customer
Use Cases may also be a part included in the SRS document as per the organization’s work process.
It is a document containing all the details related to defects. The team can maintain a ‘Defect
Verification’ document for fixing and retesting of defects. Testers can refer to the ‘Defect
Verification’ document when they want to verify whether the defects are fixed, or retest defects on
different OSes, devices, system configurations, etc.
The ‘Defect Verification’ document is handy and important when there is a dedicated defect fixing and
verification phase.
Currently, most of the software industry is moving towards the use of User Stories and Agile methodologies.
#1) The build developed and tested has the required functionality which meets the ‘Customers’/ ‘Users’
needs and expectations. The customer must get what he wants. To surprise the customer with an
application that does not do what it’s expected to do is not a satisfying experience for anyone.
#2) The end product (Software Application) developed and delivered to the customer must encompass
only the functionality that’s needed and expected. Extra features provided in the software application
may seem attractive initially until there is an overhead of time, money, and effort to develop it.
The extra feature may also become a source of Defects, which can cause problems for a customer after
installation.
#3) Developers’ initial tasks get defined clearly as they work first on implementing the requirements
which are of high priority, as per the customer’s requirements. If the customer’s high-priority requirements
are clearly specified, then those code components can be developed and implemented first.
Thus it is ensured that the end product shipped to the customer meets the topmost requirements
and is on schedule.
#4) Testers first verify the most important functionality implemented by developers. As the verification
(Testing) of the priority software component is done first it helps to determine when and if the first
versions of the system are ready to be released.
#5) Accurate Test plans, Test cases are written and executed which verify that all of the application
requirements are implemented correctly. Test cases mapping with the requirements helps to ensure
that no major defects are missed. It further helps in implementing a quality product as per customer
expectations.
#6) In case there is a ‘Change Request’ from the client, all of the application components that are affected
by the change request get modified and nothing gets overlooked. This further helps in evaluating
the impact a change request has on the software application.
#7) A seemingly simple change request might implicate modifications that need to be done to several
parts of the application. It’s better to derive a conclusion on how much effort will be required before
agreeing to make the change.
If there are any changes suggested by the Stakeholders, the same need to be communicated to
the Development and Testing teams in the earlier phases of development. Without this, on-time
development and testing of the application and capturing/fixing of defects cannot be ensured.
Identifying which are high-priority, complex, and important test scenarios is a difficult task. Trying to test
all of the Test scenarios is almost an unachievable task. The goal of testing the scenarios must be very
clear from the business and end-user point of view.
The Testing process must be clearly defined considering factors like technical infrastructure and
implementations, team skills, past experiences, organizational structures and processes followed,
project estimations related to cost, time and resources and location of the team as per the time zones.
A uniform process implementation considering the mentioned factors ensures every individual
concerned with the project is on the same page. This helps in a smooth flow of all the processes related
to application development.
Resources are of two types, skilled-domain specific testers and the testing tools used by the testers. If
the testers have proper knowledge of the domain they can write and implement effective test scenarios
and scripts. To implement these scenarios and scripts the testers should be well equipped with
appropriate ‘Testing Tools’.
Good implementation and on-time delivery of the application to the customer can be ensured
only by skilled testers and appropriate testing tools.
‘Test Strategy’ in itself is a big and separate topic of discussion. But here, for ‘Test Coverage’, an
effective test strategy implementation ensures that the ‘Quality’ of the application is good and that it is
maintained over a period of time. An effective ‘Test Strategy’ plays a major role in
planning ahead for all kinds of critical challenges, which further helps in developing a better application.
Result:
Thus, defining of Usecase Vs. Features , Functional requirements Vs. Usecases is completed
successfully .
Experiment no 10
1. Function point metric
2. Usecase point metric
Date
Aim:
To, Estimate the effort using the following methods for the system to be automated:
1. Function point metric
2. Usecase point metric
Description:
Function point metric
Function point metrics provide a standardized method for measuring the various functions of a
software application. It measures the functionality from the user’s point of view, that is, on the
basis of what the user requests and receives in return. Function point analysis is a standard
method for measuring software development from the user's point of view.
The Function Point measure originally conceived by Albrecht received increased popularity with
the inception of the International Function Point Users Group (IFPUG) in 1986. In 2002, IFPUG
Function Points became an international ISO standard – ISO/IEC 20926.
● For files (ILF and ELF), the rating is based on the RETs and DETs.
o RET (Record Element Type) − A user-recognizable subgroup of data elements within an ILF or ELF.
o DET (Data Element Type) − A unique, user-recognizable, non-repeated field.
o Based on the RET/DET complexity table, an ILF that contains 10 RETs and 5 DETs
would be ranked as high.
● Convert ratings into UFCs.
Rating Values
Rating     EO   EQ   EI   ILF   ELF
Low        4    3    3    7     5
Average    5    4    4    10    7
High       6    5    6    15    10
GSC 2 Distributed data processing − How are distributed data and processing functions handled?
GSC 4 Heavily used configuration − How heavily used is the current hardware platform where the
application will be executed?
GSC 6 On-line data entry − What percentage of the information is entered online?
GSC 7 End-user efficiency − Was the application designed for end-user efficiency?
GSC 8 On-line update − How many ILFs are updated by online transactions?
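As a rough illustration of how the rating table above is applied, the following Java sketch computes an Unadjusted Function Count (UFC) from sample component counts (all rated Average for simplicity) and then applies the standard IFPUG value adjustment formula VAF = 0.65 + 0.01 × (sum of the 14 GSC ratings). The component counts and GSC ratings are made-up sample values, not data from a real system.
public class FunctionPointEstimate {
    public static void main(String[] args) {
        // sample counts per component type, all assumed to be rated "Average"
        int eiCount = 6, eoCount = 4, eqCount = 3, ilfCount = 2, elfCount = 1;

        // "Average" weights taken from the rating table above
        int ufc = eiCount * 4 + eoCount * 5 + eqCount * 4 + ilfCount * 10 + elfCount * 7;

        // sample ratings (0-5) for the 14 General System Characteristics
        int[] gsc = {3, 2, 4, 3, 1, 5, 4, 3, 2, 2, 1, 0, 3, 2};
        int tdi = 0;                       // Total Degree of Influence
        for (int g : gsc) tdi += g;

        double vaf = 0.65 + 0.01 * tdi;    // Value Adjustment Factor
        double fp = ufc * vaf;             // adjusted function points

        System.out.println("UFC = " + ufc + ", VAF = " + vaf + ", FP = " + fp);
    }
}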
Usecase point metric
Use-Case Complexity   No. of Transactions   Weight
Simple                ≤ 3                   5
Average               4 to 7                10
Complex               > 7                   15
Step 1.1.3 − Repeat for each Use-Case and get all the Use-Case Weights. The Unadjusted Use-Case
Weight (UUCW) is the sum of all the Use-Case Weights.
Step 1.1.4 − Equivalently, find the Unadjusted Use-Case Weight (UUCW) from the counts as
UUCW = (NSUC × 5) + (NAUC × 10) + (NCUC × 15)
Where,
NSUC is the no. of Simple Use-Cases,
NAUC is the no. of Average Use-Cases,
NCUC is the no. of Complex Use-Cases.
Step 1.2 − Determine Unadjusted Actor Weight.
An Actor in a Use-Case might be a person, another program, etc. Some actors, such as a
system with defined API, have very simple needs and increase the complexity of a Use-Case
only slightly.
Some actors, such as a system interacting through a protocol have more needs and increase
the complexity of a Use-Case to a certain extent.
Other Actors, such as a user interacting through GUI have a significant impact on the
complexity of a Use-Case. Based on these differences, you can classify actors as Simple,
Average and Complex.
Step 1.2.1 − Classify Actors as Simple, Average and Complex and assign Actor Weights as
shown in the following table −
Actor Type   Weight
Simple       1
Average      2
Complex      3
Step 1.2.2 − Repeat for each Actor and get all the Actor Weights. Unadjusted Actor Weight
(UAW) is the sum of all the Actor Weights.
Step 1.2.3 − Equivalently, find the Unadjusted Actor Weight (UAW) from the counts as
UAW = (NSA × 1) + (NAA × 2) + (NCA × 3)
Where,
NSA is the no. of Simple Actors,
NAA is the no. of Average Actors,
NCA is the no. of Complex Actors.
Step 1.3 − Calculate Unadjusted Use-Case Points.
The Unadjusted Use-Case Weight (UUCW) and the Unadjusted Actor Weight (UAW) together
give the unadjusted size of the system, referred to as Unadjusted Use-Case Points.
Unadjusted Use-Case Points (UUCP) = UUCW + UAW
The next steps are to adjust the Unadjusted Use-Case Points (UUCP) for Technical Complexity
and Environmental Complexity.
Step 2 − Determine the Technical Complexity Factor. Each of the 13 Technical Factors has an assigned
impact weight, for example −
T6 Easy to install 0.5
T7 Easy to use 0.5
T8 Portable 2.0
Step 3 − Determine the Environmental Factor. Each of the 8 Environmental Factors has an assigned
impact weight, for example −
F2 Application experience 0.5
F5 Motivation 1.0
Step 3.2 − For each of the 8 Factors, assess the project and rate from 0 (irrelevant) to 5 (very
important).
Step 3.3 − Calculate the Impact of the Factor from Impact Weight of the Factor and the Rated
Value for the project as
Impact of the Factor = Impact Weight × Rated Value
Step 3.4 − Calculate the sum of the Impact of all the Factors. This gives the Total Environment
Factor (EFactor).
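Pulling the steps above together, the sketch below shows one commonly cited way (following Karner's original formulas TCF = 0.6 + 0.01 × TFactor and ECF = 1.4 − 0.03 × EFactor) to turn the intermediate values into a Use-Case Point count and an effort figure. The counts, factor totals, and the 20 person-hours-per-UCP productivity factor are illustrative assumptions, not values mandated by this experiment.
public class UseCasePointEstimate {
    public static void main(String[] args) {
        // sample counts of use cases and actors
        int nsuc = 5, nauc = 4, ncuc = 2;      // simple, average, complex use cases
        int nsa = 2, naa = 1, nca = 3;         // simple, average, complex actors

        int uucw = nsuc * 5 + nauc * 10 + ncuc * 15;   // Unadjusted Use-Case Weight
        int uaw  = nsa * 1 + naa * 2 + nca * 3;        // Unadjusted Actor Weight
        int uucp = uucw + uaw;                         // Unadjusted Use-Case Points

        // sample factor totals obtained from the weighted technical and environmental tables
        double tFactor = 30.0;                 // sum of Impact of the 13 Technical Factors
        double eFactor = 17.5;                 // sum of Impact of the 8 Environmental Factors

        double tcf = 0.6 + 0.01 * tFactor;     // Technical Complexity Factor
        double ecf = 1.4 + (-0.03) * eFactor;  // Environmental Complexity Factor

        double ucp = uucp * tcf * ecf;         // adjusted Use-Case Points
        double effortHours = ucp * 20.0;       // Karner's suggested 20 person-hours per UCP

        System.out.println("UUCP = " + uucp + ", UCP = " + ucp
                + ", estimated effort = " + effortHours + " person-hours");
    }
}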
Aim:
To, develop a tool which can be used for quantification of all the non-functional requirements
Description:
If you’ve ever dealt with non-functional requirements, you may know that different sources and guides
use different terminology. For instance, the ISO/IEC 25000 standards framework defines non-functional
requirements as system quality and software quality requirements. BABOK, one of the main knowledge
sources for business analysts, suggests the term non-functional requirements (NFR), which is currently
the most common definition. Nevertheless, these designations consider the same type of matter – the
requirements that describe operational qualities rather than a behavior of the product. The list of them
also varies depending on the source. And, frankly, it may differ for different products. For instance, if
you intend to collect any user data and your website operates in the EU, you must meet GDPR
compliance rules. In some cases, this may not be relevant to you. Or you may have additional
compliance requirements if you process payments. In this article, we’ll cover only the most common
types that should make it to your checklist. However, there may be hundreds of them. Usually, such
sources as BABOK list non-functional requirements in an isolated manner. We grouped some of them,
since the approaches to documenting these requirements overlap and some can’t be estimated without
the other ones:
Performance and scalability. How fast does the system return results? How much will this performance
change with higher workloads?
Portability and compatibility. Which hardware, operating systems, browsers, and their versions does the
software run on? Does it conflict with other applications and processes within these environments?
Reliability, availability, maintainability. How often does the system experience critical failures, and how
much time is it available to users against downtimes?
Security. How are the system and its data protected against attacks?
Localization. Does the system match local specifics?
Usability. How easy is it for a customer to use the system?
Performance defines how fast a software system or its particular piece responds to certain users’ actions
under certain workload. In most cases, this metric explains how much a user must wait before the target
operation happens (the page renders, a transaction is processed, etc.) given the overall number of users
at the moment. But it’s not always like that. Performance requirements may describe background
processes invisible to users, e.g. backup. But let’s focus on user-centric performance.
Scalability assesses the highest workloads under which the system will still meet the performance
requirements.
How to approach
Start with Google recommendations for regular web pages. Google is very sensitive about desktop and
mobile speed load times. So, if you look for performance guidance for regular web pages that all users
have access to, check Google’s page speed insights. The search engine considers multiple scenarios,
including the type of connection, mobile or desktop load, and type of content that gets displayed. Based
on the sum of factors, it suggests different performance scores that you can estimate for your website.
This is especially important if you set up the requirements for landing pages, as Google might rank your
page lower in consideration of its speed.
Google estimates performance based on multiple factors.
Check basic response time recommendations. Jakob Nielsen back in 1993 outlined 3 main metrics for
response time. While this outline may seem ancient, the metrics are still meaningful as they are generally
based on the way human attention works:
0.1 second – the limit after which the system reaction doesn’t seem instantaneous;
1 second – the limit at which the user will notice the delay, but without an interrupted flow of thought;
10 seconds – the limit after which the user’s attention is lost.
Usually, you don’t want to get anywhere near this 10-second threshold, as about 40 percent of users will
abandon a website after 3 seconds.
Specify the measurement scenario. Does your metric include browser rendering or only the time it takes
to deliver data to a browser? If different types of content load at different pace, you may have different
time constraints for text, images, and videos.
Specify the current workload for a measurement. Since you may have, say, 5 thousand users on average
during the day and 1 thousand at night, define which load scenarios you document.
Maybe you document both, maybe you want to set up the highest threshold.
Don’t include the time it takes to deliver results by third parties. If your operation depends on calls that
return data from a third-party API, your development team won’t be able to take responsibility for that.
Acknowledge the architectural constraints. If developers are dealing with an enterprise solution or a
legacy system, there may be very few ways to improve performance without reworking the entire
architecture.
Consider scalability. We also included scalability in this section, since it considers the maximum load that
the system doesn’t necessarily process now, but may process in the near future. For instance, you
expect that the number of sessions in the application will double after a marketing campaign and you
still want to preserve the existing performance. Although it’s hard to make predictions in advance, it’s
worth setting at least some load expectations.
The landing page supporting 5 thousand users per hour must provide 6 seconds or less response time in
a Chrome desktop browser, including the rendering of text and images, over an LTE connection.
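Since the aim of this experiment is a tool that quantifies non-functional requirements, one simple starting point is to represent each NFR as a measurable record: a quality attribute, a metric, a target value, a unit, and the conditions under which it is measured. The Java sketch below models the example requirement above; the class name, field names, and the check() method are hypothetical illustrations, not a standard API.
public class QuantifiedNfr {
    private final String attribute;      // e.g. "Performance"
    private final String metric;         // e.g. "page response time"
    private final double targetValue;    // numeric threshold
    private final String unit;           // e.g. "seconds"
    private final String conditions;     // workload, device, network, etc.

    public QuantifiedNfr(String attribute, String metric, double targetValue,
                         String unit, String conditions) {
        this.attribute = attribute;
        this.metric = metric;
        this.targetValue = targetValue;
        this.unit = unit;
        this.conditions = conditions;
    }

    // returns true if a measured value satisfies the requirement (lower is better here)
    public boolean check(double measuredValue) {
        return measuredValue <= targetValue;
    }

    @Override
    public String toString() {
        return attribute + ": " + metric + " <= " + targetValue + " " + unit
                + " (" + conditions + ")";
    }

    public static void main(String[] args) {
        QuantifiedNfr landingPage = new QuantifiedNfr("Performance", "response time", 6.0,
                "seconds", "5,000 users/hour, Chrome desktop, LTE connection");
        System.out.println(landingPage);
        System.out.println("Measured 4.2 s satisfies requirement: " + landingPage.check(4.2));
    }
}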
Portability defines how a system or its element can be launched on one environment or another.
It usually includes hardware, software, or other usage platform specification. Put simply, it establishes
how well actions performed via one platform are run on another. Also, it prescribes how well system
elements may be accessed and may interact from two different environments.
Portability also has an additional aspect called compatibility. Compatibility defines how a system can co-
exist with another system in the same environment. For instance, software installed on an operating
system must be compatible with its firewall or antivirus protection.
Portability and compatibility are established in terms of operating systems, hardware devices, browsers,
software systems, and their versions. For now, a cross-platform, cross-browsing, and mobile-responsive
solution is a common standard for web applications. Portability non-functional requirements are usually
based on preliminary market research, field research activities, or analytics reports on the types of
software and devices the target audience has. If you are working within a corporate environment and
the software will be accessed through a documented list of devices and operating systems, it’s quite
easy to define compatibility and portability.
How to approach
Infer portability requirements from your analytics tools if you can. If you have access to visitor data via
Google Analytics or other analytical platforms, you can assess which types of devices, browsers, and
their versions are most frequently used. Consider the fullest list of portability requirements. Not only
will this document provide guidance to engineers, it will also outline the scope of testing scenarios:
Network specifics,
Define compatibility with other applications, including 3rd parties. If the system must coexist with third-
party software or other applications in the software ecosystem, include them.
While these three types of requirements are usually documented separately, we aggregate them in one
section, since they approach the same sort of a problem from different angles. Another thing to keep in
mind with these requirements is that they are extremely hard to express in calculative terms. And,
frankly, many system providers don’t document them at all. Let’s see. Reliability. This quality attribute
specifies how likely the system or its element would run without a failure for a given period of time
under predefined conditions. Traditionally, it’s expressed as a probability percentage. For instance, if the
system has 85 percent reliability for a month, this means that during this month, under normal usage
conditions, there’s an 85 percent chance that the system won’t experience critical failure. As you may
have guessed, it’s fairly tricky to define critical failure, time, and normal usage conditions. Another,
somewhat simpler approach to that metric is to count the number of critical bugs found in production
for some period of time, or to calculate the mean time to failure.
Maintainability. Maintainability defines the time required for a solution or its component to be fixed,
changed to increase performance or other qualities, or adapted to a changing environment. Like
reliability, it can be expressed as a probability of repair during some time. For example, if you have 75
percent maintainability for 24 hours, this means that there’s a 75 percent chance the component can be
fixed in 24 hours.
Availability. And finally, availability describes how likely the system is accessible for a user at a given
point in time. While it can be expressed as a probability percentage, you may also define it as a
percentage of time the system is accessible for operation during some time period. For instance, the
system may be available 98 percent of the time during a month. Availability is perhaps the most
business-critical requirement, but to define it, you also must have estimations for reliability and
maintainability. As you can see, these three metrics are closely connected. And more importantly, you
should approach them together if you decide to document them as non-functional requirements to your
system.
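To make these metrics concrete, availability is often estimated from the mean time between failures (MTBF, a reliability figure) and the mean time to repair (MTTR, a maintainability figure) as Availability = MTBF / (MTBF + MTTR). The small Java sketch below applies this widely used formula to made-up sample numbers; it is an illustration, not a measurement procedure prescribed by this document.
public class AvailabilityEstimate {
    // Availability = MTBF / (MTBF + MTTR), expressed as a percentage
    static double availabilityPercent(double mtbfHours, double mttrHours) {
        return 100.0 * mtbfHours / (mtbfHours + mttrHours);
    }

    public static void main(String[] args) {
        double mtbf = 720.0;  // sample: one critical failure per 720 hours (30 days)
        double mttr = 4.0;    // sample: 4 hours to restore service
        System.out.printf("Estimated availability: %.2f%%%n", availabilityPercent(mtbf, mttr));
    }
}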
NFRs are often thought of as the “itys.” While the specifics will vary between products, having a
list of these NFR types defined up front provides a handy checklist to make sure you’re not
overlooking anything important.
NFR “Itys”
Security — Does your product store or transmit sensitive information? Does your IT department
require adherence to specific standards? What security best practices are used in your industry?
Capacity — What are your system’s storage requirements, today and in the future? How will
your system scale up for increasing volume demands?
Reliability and Availability — What is the critical failure time under normal usage? Does a
user need access to the system at all hours of every day?
Maintainability + Manageability—How much time does it take to fix components, and how
easily can an administrator manage the system? Under this umbrella, you could also define
Recoverability and Serviceability.
Scalability — What are the highest workloads under which the system will still perform as expected?
Usability — How easy is it to use the product? What defines the experience of using the product?
Performance — How quickly does the system respond to users’ actions, or how long does a user wait for
a specific operation to happen?
Environmental — What types of environments will the system be expected to perform within?
Result:
Thus, developing a tool which can be used for quantification of all the non-functional requirements
is completed successfully.
Experiment no 12
Types of coupling.
Date
Aim:
To write a Java program for classifying the various types of coupling.
Description:
Coupling refers to the usage of an object by another object. It can also be termed as collaboration. This
dependency of one object on another object to get some task done can be classified into the following
two types −
● Tight coupling - When an object creates the object it is going to use, it is a tight coupling situation.
Because the main object creates the required object itself, that object cannot easily be changed from the
outside world, so the objects are said to be tightly coupled.
● Loose coupling - When an object gets the object to be used from the outside, it is a loose
coupling situation. Because the main object merely uses the object, that object can easily be changed from
the outside world, so the objects are said to be loosely coupled.
Example - Tight Coupling
Tester.java
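The tight-coupling code listing itself is not reproduced in this manual; the following Tester.java is a minimal sketch, consistent with the output shown below, of what such an example typically looks like: class A constructs its own B internally, so B cannot be substituted from outside.
public class Tester {
   public static void main(String args[]) {
      A a = new A();
      //a.display() will print A and B
      a.display();
   }
}
class A {
   B b;
   public A() {
      //A creates the B object itself, so b is tightly coupled to A
      b = new B();
   }
   public void display() {
      System.out.println("A");
      b.display();
   }
}
class B {
   public B() {}
   public void display() {
      System.out.println("B");
   }
}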
Output
A
B
Example - Loose Coupling
Tester.java
public class Tester {
   public static void main(String args[]) {
      B b = new B();
      C c = new C();
      A a = new A(b);
      //a.display() will print A and B
      a.display();
      A a1 = new A(c);
      //a1.display() will print A and C
      a1.display();
   }
}
interface Show {
public void display();
}
class A {
Show s;
public A(Show s) {
//s is loosely coupled to A
this.s = s;
}
public void display() {
System.out.println("A");
s.display();
}
}
class B implements Show {
public B(){}
public void display() {
System.out.println("B");
}
}
class C implements Show {
public C(){}
public void display() {
System.out.println("C");
}
}
Result:
Thus, to write the Java program for classifying the various types of coupling is completed
successfully and output is verified.
Experiment no 13
Date
Types of cohesion
Aim:
To write the java program for implementing types of cohesion.
Cohesion in Java is the Object-Oriented principle most closely associated with making sure that a class is designed
with a single, well-focused purpose. In object-oriented design, cohesion refers to how a single class is designed.
Note: The more focused a class is, the more is the cohesiveness of that class.
The advantage of high cohesion is that such classes are much easier to maintain (and less frequently changed) than
classes with low cohesion. Another benefit of high cohesion is that classes with a well-focused purpose tend to be
more reusable than other classes.
Example: Suppose we have a class that multiplies two numbers, but the same class creates a pop-up window
displaying the result. This is an example of a low-cohesion class, because the window and the multiplication operation
don’t have much in common. To make it highly cohesive, we would have to create a class Display and a class Multiply.
The Display class will call Multiply’s method to get the result and display it. This is the way to develop a highly cohesive solution.
Let us understand the structure of the high cohesive program:
Program:
class Multiply {
   int a = 5;
   int b = 5;
   // returns the product of the two numbers
   int mul(int a, int b) {
      this.a = a;
      this.b = b;
      return a * b;
   }
}
class Display {
   public static void main(String args[]) {
      Multiply m = new Multiply();
      System.out.println(m.mul(5, 5));
   }
}
Output
25
Result:
Thus, to write the java program for implementing types of cohesion is completed successfully
and output is verified.
Aim:
To convert the DFD of the system to be automated into appropriate architecture styles.
Description:
A Data Flow Diagram (DFD) is a traditional visual representation of the information flows
within a system. A neat and clear DFD can depict the right amount of the system requirement
graphically. It can be manual, automated, or a combination of both.
It shows how data enters and leaves the system, what changes the information, and where data is
stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be
used as a communication tool between a system analyst and any person who plays a part in the
system, and it acts as a starting point for redesigning a system. The DFD is also called a data flow
graph or bubble chart.
1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flowchart. Arrows in a flowchart represent the order
of events; arrows in a DFD represent flowing data. A DFD does not involve any order of
events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a
DFD, suppress that urge! A diamond-shaped box is used in flowcharts to represent
decision points with multiple exit paths, of which only one is taken. This implies an
ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until
the end of the analysis.
Standard symbols for DFDs are derived from the electric circuit diagram analysis and are shown
in fig:
Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data store.
Data Store: A set of parallel lines shows a place for the collection of data items. A data store
indicates that the data is stored which can be used at a later stage or by the other processes in a
different order. The data store can have an element or group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system inputs or sink
of system outputs.
Levels in Data Flow Diagrams (DFD)
The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs
may be partitioned into levels that represent increasing information flow and functional detail.
Levels in DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the
data flow diagram, which are: 0-level DFD, 1-level DFD, and 2-level DFD.
0-level DFD
It is also known as fundamental system model, or context diagram represents the entire software
requirement as a single bubble with input and output data denoted by incoming and outgoing
arrows. Then the system is decomposed and described as a DFD with multiple bubbles. Parts of
the system represented by each of these bubbles are then decomposed and documented as more
and more detailed DFDs. This process may be repeated at as many levels as necessary until the
program at hand is well understood. It is essential to preserve the number of inputs and outputs
between levels; this concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs
x1 and x2 and one output y, then the expanded DFD, that represents "A" should have exactly two
external inputs and one external output as shown in fig:
The Level-0 DFD, also called context diagram of the result management system is shown in fig.
As the bubbles are decomposed into less and less abstract bubbles, the corresponding data flow
may also be needed to be decomposed.
1-level DFD
In 1-level DFD, a context diagram is decomposed into multiple bubbles/processes. In this level,
we highlight the main objectives of the system and breakdown the high-level process of 0-level
DFD into subprocesses.
2-Level DFD
2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to project or
record the specific/necessary detail about the system's functioning.
Result:
Thus, Converting the DFD into appropriate architecture styles is completed successfully and
output is verified
Experiment no 15
Complete class diagram and object diagrams using Rational tools
Date
Aim:
To draw the complete class diagram and object diagrams using Rational tools.
Description:
UML Tools
Since UML is a Unified Modeling Language, it is used to create meaningful, object-oriented models for a
software application. It clearly represents the working of any hardware/software system. There are
numerous tools, both commercial and open-source, available for designing UML diagrams; some of them
are listed below:
1. StarUML
StarUML is an open-source software modeling tool, which is provided by MKLab. It has come up with
eleven different types of modeling diagrams. It also supports UML2.0 specified diagrams.
Features:
o It lets you create Object, Use case, Deployment, Sequence, Collaboration, Activity, and Profile
diagrams.
2. Umbrello
Umbrello is a Unified Modeling language tool, which is based on KDE technology. It supports both
reverse engineering and code generation for C++and Java
3. UML Designer
The UML Designer tool helps in modifying and envisioning UML 2.5 models. It allows you to create all of
the UML diagrams.
Features:
o With the UML designer tool, the user can reuse the provided presentations.
o To start working with DSL, you can use UML legacy models.
4. Altova
Altova has provided UModel, which is another UML software modeling tool. It supports all types of 14
UML2 diagrams as well as SysML for the embedded systems. It also holds up for business process
modeling for enterprise analysts. It generates visually designed software models by incorporating Java,
C++, C#, or Visual Basic .NET code.
Features:
o It also provides an intuitive color-coding, icons, customized alignment grid, and cascading styles
for colors, fonts line size.
5. Umple
Umple is an object-oriented programming and modeling language that textually supports state diagrams
and class diagrams. It works with Java, C++, and PHP, which results in shorter and more readable code.
Features:
o It includes Singleton pattern, keys, immutability, mixins, and aspect-oriented code injection,
which makes UML more understandable to the users.
6. Visual Paradigm
A visual Paradigm is a tool that supports SysML, UML2, and Business Process Modeling Notation from
Object Management Group. It involves report generation as well as code generation.
Class Diagram
The class diagram depicts a static view of an application. It represents the types of objects residing in the
system and the relationships between them. A class consists of its objects, and also it may inherit from
other classes. A class diagram is used to visualize, describe, document various different aspects of the
system, and also construct executable software code.
It shows the attributes, classes, functions, and relationships to give an overview of the software system. It
constitutes class names, attributes, and functions in a separate compartment that helps in software
development. Since it is a collection of classes, interfaces, associations, collaborations, and constraints, it
is termed as a structural diagram.
The main purpose of class diagrams is to build a static view of an application. It is the only diagram that is
widely used for construction, and it can be mapped with object-oriented languages. It is one of the most
popular UML diagrams. A class is drawn as a rectangle divided into three sections, which are described below:
o Upper Section: The upper section encompasses the name of the class. A class is a
representation of similar objects that shares the same relationships, attributes, operations,and semantics.
Some of the following rules that should be taken into account while representing a class are given below:
o Middle Section: The middle section constitutes the attributes, which describe the quality of the
class. The attributes have the following characteristics:
a. The attributes are written along with their visibility factors, which are public (+), private (-), protected (#),
and package (~).
b. A meaningful name should be assigned to the attribute, which will explain its usage inside the
class.
o Lower Section: The lower section contains methods or operations. The methods are represented
in the form of a list, where each method is written in a single line. It demonstrates how a class interacts
with data.
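To relate the three compartments to code, the small Java class below is an illustrative (hypothetical) example: the class name corresponds to the upper section, the fields with their visibility markers (- private, + public, # protected) to the middle section, and the methods to the lower section.
public class BankAccount {          // upper section: class name
    private String accountNumber;   // middle section: "-accountNumber : String"
    protected double balance;       // middle section: "#balance : double"

    // lower section: "+deposit(amount : double) : void"
    public void deposit(double amount) {
        balance = balance + amount;
    }

    // lower section: "+getBalance() : double"
    public double getBalance() {
        return balance;
    }
}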
Some key points that are needed to keep in mind while drawing a class diagram are given below:
1. To describe a complete aspect of the system, it is suggested to give a meaningful name to the
class diagram.
4. A minimum number of desired properties should be specified, as more unwanted properties
will lead to a complex diagram.
5. Notes can be used as and when required by the developer to describe the aspects of a diagram.
6. The diagrams should be redrawn and reworked as many times as needed to make them correct
before producing the final version.
Object diagrams are dependent on the class diagram as they are derived from the class diagram. It
represents an instance of a class diagram. The objects help in portraying a static view of an object-
oriented system at a specific instant.
Both the object and class diagram are similar to some extent; the only difference is that the class diagram
provides an abstract view of a system. It helps in visualizing a particular functionality of a system.
The object diagram holds the same purpose as that of a class diagram. The class diagram provides an
abstract view which comprises of classes and their relationships, whereas the object diagram represents
an instance at a particular point of time.
The object diagram is actually similar to the concrete (actual) system behavior. The main purpose is to
depict a static view of a system.
1. All the objects present in the system should be examined before start drawing the object
diagram.
2. Before creating the object diagram, the relation between the objects must be acknowledged.
The following are the application areas where the object diagrams can be used.
4. Reverse engineering.
Result:
Thus, drawing the complete class diagram and object diagrams using Rational tools is completed
successfully and output is verified.
Aim:
To define the design activities along with the necessary artifacts using a Design Document.
Description:
To make the development of a complete software system manageable, distinct collections of information
are organized into artifact sets. Artifact represents cohesive information that typically is developed and
reviewed as a single entity.
Life-cycle software artifacts are organized into five distinct sets that are roughly partitioned by the
underlying language of the set: management (ad hoc textual formats), requirements (organized text and
models of the problem space), design (models of the solution space), implementation (human-readable
programming language and associated source files), and deployment (machine-processable languages
and associated files). The artifact sets are shown in the figure.
Management set artifacts are evaluated, assessed, and measured through a combination of the
following:
• Analysis of changes between the current version of the artifact and previous versions
• Major milestone demonstrations of the balance among all artifacts and, in particular, the accuracy
of the business case and vision artifacts
The engineering sets consist of the requirements set, the design set, the implementation set, and the
deployment set.
Requirements Set
Requirements artifacts are evaluated, assessed, and measured through a combination of the following:
• Mapping against the design, implementation, and deployment sets to evaluate the consistency
and completeness and the semantic balance between information in the different sets
• Analysis of changes between the current version of requirements artifacts and previous
versions (scrap, rework, and defect elimination trends)
Design Set
UML notation is used to engineer the design models for the solution. The design set contains
varying levels of abstraction that represent the components of the solution space (their identities,
attributes, static relationships, dynamic interactions). The design set is evaluated, assessed, and
measured through a combination of the following:
• Translation into implementation and deployment sets and notations (for example, traceability,
source code generation, compilation, linking) to evaluate the consistency and completeness and the
semantic balance between information in the sets
• Analysis of changes between the current version of the design model and previous versions
(scrap, rework, and defect elimination trends)
Implementation Set
The implementation set includes source code (programming language notations) that represents the
tangible implementation of components. Implementation sets are evaluated, assessed, and measured
through a combination of the following:
• Translation into deployment set notations (for example, compilation and linking) to evaluate
the consistency and completeness among artifact sets
• Execution of stand-alone component test cases that automatically compare expected results with
actual results
• Analysis of changes between the current version of the implementation set and previous
versions (scrap, rework, and defect elimination trends)
Deployment Set
The deployment set includes user deliverables and machine language notations, executable software,
and the build scripts, installation scripts, and executable target specific data necessary to use the product
in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
• Testing against the usage scenarios and quality attributes defined in the requirements set to
evaluate the consistency and completeness and the semantic balance between information in the two
sets
• Testing the partitioning, replication, and allocation strategies in mapping components of the
implementation set to physical resources of the deployment system (platform type, number, network
topology)
• Testing against the defined usage scenarios in the user manual such as installation, user-oriented
dynamic reconfiguration, mainstream usage, and anomaly management
• Analysis of changes between the current version of the deployment set and previous versions
(defect elimination trends, performance changes)
Each artifact set is the predominant development focus of one phase of the life cycle; the other sets take
on check and balance roles. As illustrated in Figure below each phase has a predominant focus:
Requirements are the focus of the inception phase; design, the elaboration phase; implementation, the
construction phase; and deployment, the transition phase. The management artifacts also evolve, but at
a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact sets.
4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and
testmanagement tools
5. Deployment: test coverage and test automation tools, network management tools, commercial
components (operating systems, GUIs, RDBMS, networks, middleware), and installation tools.
• Dynamically reconfigurable parameters (buffer sizes, color palettes, number of servers, number
of simultaneous clients, data files, run-time parameters)
• Performance under certain allocation strategies (centralized versus distributed, primary and
shadow threads, dynamic load balancing, hot backup versus checkpoint/rollback)
• Virtual machine constraints (file descriptors, garbage collection, heap size, maximum record
size, disk file rotations)
Each state of development represents a certain amount of precision in the final system description. Early
in the life cycle, precision is low and the representation is generally high. Eventually, the precision of
representation is high and everything is specified in full detail. Each phase of development focuses on a
particular artifact set. At the end of each phase, the overall system state will have progressed on all sets,
as illustrated in Figure
The inception phase focuses mainly on critical requirements usually with a secondary focus on an initial
deployment view. During the elaboration phase, there is much greater depth in requirements, much more
breadth in the design set, and further work on implementation and deployment issues. The main focus of
the construction phase is design and implementation. The main focus of the transition phase is on
achieving consistency and completeness of the deployment set in the context of the other sets.
TEST ARTIFACTS
• The test artifacts must be developed concurrently with the product from inception through
deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle activity.
• The test artifacts are communicated, engineered, and developed within the same artifact sets as the developed product.
• The test artifacts are implemented in programmable and repeatable formats (as software programs).
• The test artifacts are documented in the same way that the product is documented.
• Developers of the test artifacts use the same tools, techniques, and training as the software engineers developing the product.
Although test artifact subsets are highly project-specific, the following example clarifies the relationship between
test artifacts and the other artifact sets. Consider a project to perform seismic data processing for the
purpose of oil exploration. This system has three fundamental subsystems: (1) a sensor subsystem that
captures raw seismic data in real time and delivers these data to (2) a technical operations subsystem
that converts raw data into an organized database and manages queries to this database from (3) a
display subsystem that allows workstation operators to examine seismic data in human-readable form.
Such a system would result in the following test artifacts:
• Management set. The release specifications and release descriptions capture the objectives,
evaluation criteria, and results of an intermediate milestone. These artifacts are the test plans and test
results negotiated among internal project teams. The software change orders capture test results
(defects, testability changes, requirements ambiguities, enhancements) and the closure criteria
associated with making a discrete change to a baseline.
• Requirements set. The system-level use cases capture the operational concept for the system
and the acceptance test case descriptions, including the expected behavior of the system and its
quality attributes. The entire requirement set is a test artifact because it is the basis of all assessment
activities across the life cycle.
• Design set. A test model for nondeliverable components needed to test the product baselines is
captured in the design set. These components include such design set artifacts as a seismic event
simulation for creating realistic sensor data; a "virtual operator" that can support unattended, after-hours test cases; specific instrumentation suites for early demonstration of resource usage, transaction rates, or response times; and use case test drivers and component stand-alone test drivers.
• Implementation set. Self-documenting source code representations for test components and test
drivers provide the equivalent of test procedures and test scripts. These source files may also include
human-readable data files representing certain statically defined data sets that are explicit test source
files. Output files from test drivers provide the equivalent of test reports.
• Deployment set. Executable versions of test components, test drivers, and data files are
provided.
MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results and ancillary information
necessary to document the product/process legacy, maintain the product, improve the product, and
improve the process.
Business Case
The business case artifact provides all the information necessary to determine whether the project is
worth investing in. It details the expected revenue, expected cost, technical and management plans, and
backup data necessary to demonstrate the risks and realism of the plans. The main purpose is to
transform the vision into economic terms so that an organization can make an accurate ROI assessment.
The financial forecasts are evolutionary, updated with more accurate forecasts as the life cycle
progresses. Figure 6-4 provides a default outline for a business case.
Software Development Plan
The software development plan (SDP) elaborates the process framework into a fully detailed plan. Two
indications of a useful SDP are periodic updating (it is not stagnant shelfware) and understanding and
acceptance by managers and practitioners alike. Figure 6-5 provides a default outline for a software
development plan.
Work Breakdown Structure
Work breakdown structure (WBS) is the vehicle for budgeting and collecting costs. To monitor and control a project's financial performance, the software project manager must have insight into project costs and how they are expended. The structure of cost accountability is a serious project planning constraint.
Software Change Order Database
Managing change is one of the fundamental primitives of an iterative development process. With greater
change freedom, a project can iterate more productively. This flexibility increases the content, quality,
and number of iterations that a project can achieve within a given schedule. Change freedom has been
achieved in practice through automation, and today's iterative development environments carry the
burden of change management. Organizational processes that depend on manual change management
techniques have encountered major inefficiencies.
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived from the vision
statement as well as many other sources (make/buy analyses, risk management concerns, architectural
considerations, shots in the dark, implementation constraints, quality thresholds). These artifacts are
intended to evolve along with the process, achieving greater fidelity as the life cycle progresses and
requirements understanding matures. Figure 6-6 provides a default outline for a release specification
Release Descriptions
Release description documents describe the results of each release, including performance against each
of the evaluation criteria in the corresponding release specification. Release baselines should be
accompanied by a release description document that describes the evaluation criteria for that
configuration baseline and provides substantiation (through demonstration, testing, inspection, or
analysis) that each criterion has been addressed in an acceptable manner. Figure 6-7 provides a default
outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the software
project manager's risk assessment, quality indicators, and management indicators. Typical status
assessments should include a review of resources, personnel staffing, financial data (cost and revenue), top 10 risks, technical progress (metrics snapshots), major milestone plans and results, total project or product scope, and action items.
Environment
An important emphasis of a modern approach is to define the development and maintenance
environment as a first-class artifact of the process. A robust, integrated development environment must
support automation of the development process. This environment should include requirements
management, visual modeling, document automation, host and target programming tools, automated
regression testing, continuous and integrated change management, and feature and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts in which
the system is delivered to a separate maintenance organization, deployment artifacts may include
computer system operations manuals, software installation manuals, plans and procedures for cutover
(from a legacy system), site surveys, and so forth. For commercial software products, deployment
artifacts may include marketing plans, sales rollout kits, and training courses.
In each phase of the life cycle, new artifacts are produced and previously developed artifacts are updated
to incorporate lessons learned and to capture further depth and breadth of the solution. Figure 6-8
identifies a typical sequence of artifacts across the life-cycle phases.
ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
programming languages, or executable machine codes. Three engineering artifacts are explicitly
intended for more general review, and they deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under development and
supports the contract between the funding authority and the development organization. A project vision is
meant to be changeable as understanding evolves of the requirements, architecture, plans, and
technology. A good vision document should change slowly. Figure 6-9 provides a default outline for a
vision document.
Architecture Description
The architecture description provides an organized view of the software architecture under development.
It is extracted largely from the design model and includes views of the design, implementation, and
deployment sets sufficient to understand how the operational concept of the requirements set will be
achieved. The breadth of the architecture description will vary from project to project depending on many
factors. Figure 6-10 provides a default outline for an architecture description.
Result:
Thus, defining the design activities along with the necessary artifacts using the Design Document is completed successfully and output is verified.
Experiment no 17 Reverse Engineer any object-oriented code to
Date an appropriate class and object diagrams.
Aim:
Reverse Engineer any object-oriented code to an appropriate class and object diagrams.
1. Introduction
Computational research has been referred to as the third pillar of scientific and engineering research,
along with experimental and theoretical research [1]. Computational science and engineering (CSE)
researchers develop software to simulate natural phenomena that cannot be studied experimentally or to
process large amounts of data. CSE software has a large impact on society as it is used by researchers
to study critical problems in a number of important application domains, including weather forecasting,
astrophysics, construction of new physical materials, and cancer research [2]. For example, US
capabilities in science and engineering are frequently called upon to address urgent challenges in
national and homeland security, economic competitiveness, health care, and environmental protection
[3]. Recently the software engineering (SE) community has become more interested in the development
of software for CSE research.
In this critical type of software, Fortran is still a very widely used programming language [4]. Due to the
growing complexity of the problems being addressed through CSE, the procedural programming style
available in a language like Fortran 77 is no longer sufficient. Many developers have applied the object-
oriented programming (OOP) paradigm to effectively implement the complex data structures required by
CSE software. In the case of Fortran developers, this OOP paradigm was first emulated following an
object-based approach in Fortran 90/95 [5–7]. By including full support for OOP constructs, the Fortran
2003 language standard influenced the advent of several CSE packages [8–12].
One of the greatest challenges faced by CSE developers is the ability to effectively maintain their
software over its generally long lifetime [13]. This challenge implies high development and maintenance
costs during a software system’s lifetime. The difficulty of the maintenance process is affected by at least
three factors. First, most CSE developers are not formally trained in SE. Second, some existing SE tools
are difficult to use in CSE development. In general, CSE developers request tools that accommodate documentation, support correctness testing, and aid in designing software for testability. Unfortunately, most SE tools were not designed to be
used in the context of CSE development. Third, CSE software often lacks the formal documentation
necessary to help developers understand its complex design. This lack of documentation presents an
even larger software maintenance challenge. The objective of this work is to provide tool support for
automatically extracting UML class diagrams from OO Fortran code.
To address this objective, we developed and evaluated the ForUML tool. ForUML uses a reverse
engineering approach to transform Fortran source code into UML models. To ensure flexibility, our
solution uses a Fortran parser that does not depend on any specific Fortran compiler and generates
output in the XML Metadata Interchange (XMI) format. The tool then displays the results of the
analysis (the UML class diagram) using the ArgoUML (http://argouml.tigris.org/)
modeling tool. We evaluated the accuracy of ForUML using five CSE software packages that
use object-oriented features from the Fortran 95, 2003, and 2008 compiler standards. This paper extends
the workshop paper [14] by providing more background information and more details on the
transformation process in ForUML. Additionally, this paper includes a discussion of the audience
feedback during the Workshop on Software Engineering for High Performance Computing in
Computational Science and Engineering (SE-HPCCSE’13).
The contributions of this paper are as follows: (i) the ForUML tool that will help CSE developers extract UML design diagrams from OO Fortran code to enable them to make good decisions about software development and maintenance tasks; (ii) a description of the transformation process used to develop ForUML, which may help other tool authors create tools for the CSE community; (iii) the results of the evaluation and our experiences using ForUML on real CSE projects to highlight its benefits and limitations; (iv) workshop feedback that should help SE develop practices and tools that are suitable for use in the CSE domain.
The rest of this paper is organized as follows. Section 2 provides the background concepts related to this
work. Section 3 presents ForUML. Section 4 describes the evaluation and our experiences with ForUML.
Section 5 discusses the evaluation results and limitations of ForUML. Finally, Section 6 draws
conclusions and presents future work.
2. Related Work
This section first describes important CSE characteristics that impact the development of tool support.
Next, it presents two important concepts used in the development of ForUML, reverse engineering and
OO Fortran. Finally, because one of the benefits of using ForUML is the ability to recognize and maintain
design patterns, the last subsection provides some background on design patterns.
This section highlights three characteristics of CSE software development that differentiate it from
traditional software development. First, CSE developers typically have a strong background in the
theoretical science but often do not have formal training about SE techniques that have proved
successful in other software areas. More specifically, because the complexity of the problems addressed by CSE generally requires a domain expert (e.g., a Ph.D. in physics or biology) to even understand the problem, that domain expert generally must also learn how to develop software [15]. Wilson [16] stated
that one of the reasons why scientists tend to be less effective programmers is that they do not have the
time to learn yet another programming language and software tool. Furthermore, the CSE culture,
including most funding agencies, tends to view software as the means to a new scientific discovery rather
than as a CSE instrument that must be carefully engineered, maintained, and extended to enable novel
science.
Second, some SE tools are difficult to use in a CSE development environment [17]. CSE applications are
generally developed with software tools that are crude compared to those used today in the commercial
sector. Researchers and scientists seek easy-to-use software that enables analysis of complex data and
visualization of complicated interactions. Consequently, CSE developers often have trouble identifying
and using the most appropriate SE techniques for their work, in particular as it relates to reverse
engineering tasks. Scientists interested in scientific research cannot spend most of their time
understanding and using complex software tools. The limited interoperability of the tools and their
complexity are major obstructions to their adoption by the CSE community. For example, Storey noted
that CSE developers who lack
formal SE training need help with program comprehension when they are developing complex
applications [18]. To address this problem, the SE community must develop tools that satisfy the needs of
CSE developers. These tools must allow the developers to easily perform important reverse engineering
tasks. More specifically, a visualization-based tool is appropriate for program comprehension in complex
object-oriented applications [19].
Third, CSE software typically lacks adequate development-oriented documentation [20]. In fact,
documentation for CSE software often exists only in the form of subroutine library documentation. This
documentation is usually quite clear and sufficient for library users, who treat the library as a black box,
but not sufficient for developers who need to understand the library in enough detail to maintain it. The
lack of design documentation in particular leads to multiple problems. Newcomers to a project must
invest a lot of effort to understand the code. There is an increased risk of failure when developers of
related systems cannot correctly understand how to interact with the subject system. In addition, the lack
of documentation makes refactoring and maintenance difficult and error prone. CSE software typically
evolves over many years and involves multiple developers [21], as functionality and capabilities are
added or extended [22]. The developers need to be able to determine whether the evolved software
deviates from the original design intent. To ease this process, developers need tools that help them
identify changes that affect the design and determine whether those changes have undesired effects on
design integrity. The availability of appropriate design documentation can reduce the likelihood of poor
choices during the maintenance process.
Reverse engineering is a method that transforms source code into a model [23]. ForUML builds upon and
expands some existing reverse engineering work. The Dagstuhl middle meta model (DMM) is a schema
for describing the static structure of source code [24]. DMM supports reverse engineering by representing
models extracted from source code written in most common OOP languages. We applied the idea of
DMM to OO Fortran.
The transformation process in ForUML is based on the XMI format, which provides a standard method of
mapping an object model into XML. XMI is an open standard that allows developers and software
vendors to create, read, manage, and exchange models between XMI-compliant tools. Transforming
the model (Fortran code) to XMI requires use of the model driven architecture (MDA) technology [25], a
modeling standard developed by the object management group (OMG) [26]. MDA aims to increase
productivity and reuse by using separation of concerns and abstraction. A platform independent model
(PIM) is an abstract model that contains the information to drive one or more platform specific models
(PSMs), including source code, data definition language (DDL), XML, and other outputs specific to the
target platform. MDA defines transformations that map from PIMs to PSMs.
The basic idea of using an XMI file to maintain the metadata for UML diagrams was drawn from four
reverse engineering tools. Alalfi et al. developed two tools that use XMI to maintain the metadata for the
UML diagrams: a tool that generates UML sequence diagrams for web application code [27] and a tool to
create UML-entity relationship diagrams for the structured query language (SQL) [28]. Similarly,
Korshunova et al. [29] developed CPP2XMI to extract various UML diagrams from C++ source code.
CPP2XMI generates an XMI document that describes the UML diagram, which is then displayed
graphically by DOT (part of the Graphviz framework) [30]. Duffy and Malloy [31] created libthorin, a tool to
convert C++ source code into UML diagrams. Prior to converting an XMI document into a UML diagram,
libthorin requires developers to use a third party compiler to compile code into the DWARF
(http://www.dwarfstd.org/), which is a debugging file format used to support source level debugging. In
terms of Fortran, DWARF only supports Fortran 90, which does not include all object-oriented features.
This limitation may cause compatibility problems with different Fortran compilers. Conversely, ForUML is
compiler independent and able to generate UML for all types of OO Fortran code.
Doxygen is a documentation tool that can use Fortran code to generate either a simple, textual
representation with procedural interface information or a graphical representation. The only OOP class
relationship Doxygen supports is inheritance. With respect to our goals, Doxygen has two primary
limitations. First, it does not support all OOP features within Fortran (e.g., type-bound procedures and
components). Second, the diagrams generated by Doxygen only include class names and class
relationships but do not contain other important information typically included in UML class diagrams
(e.g., methods, properties). Our work expands upon
Doxygen by adding support for OO Fortran and by generating UML diagrams that include all relevant
information about the included classes (e.g., properties, methods, and signatures).
There are a number of available tools (both open source and commercial) that claim to transform OO
code into UML diagrams (e.g., Altova UModel, Enterprise Architect, StarUML, and ArgoUML). However,
in terms of our work, these tools do not support OO Fortran. Although they cannot directly create UML
diagrams from OO Fortran code, most of these tools are able to import the metadata describing UML
diagrams (i.e., the XMI file) and generate the corresponding UML diagrams. ForUML takes advantage of
this feature to display the UML diagrams described by the XMI files it generates from OO Fortran code.
This previous work has contributed significantly to the reverse engineering tools of traditional software.
ForUML specifically offers a method to reverse engineer code implemented with modern Fortran,
including features in the Fortran 2008 standard. Moreover, the tool was deliberately designed to support
important features of Fortran, such as coarrays, procedure overloading, and operator overloading.
A design pattern is a generic solution to a common software design problem that can be reused in similar
situations. Design patterns are made of the best practices drawn from various sources, such as building
software applications, developer experiences, and empirical studies. Generally, we can classify the
design patterns of the software into classical and novel design patterns. The 23 classical design patterns
were introduced by the “Gang of Four” (GoF) [38]. Subsequently, software developers and researchers
have proposed a number of novel design patterns targeted at particular domains, for example, parallel
programming [39, 40].
In general, a design pattern includes a section known as intent. Intent is “a short statement that answers
the following questions: What does the design pattern do? What is its rationale and intent? What
particular design issues or problem does it address?” [38]. For example, the intent of the template
method pattern requires that developers define the skeleton of an algorithm in an operation, deferring
some steps to subclasses. Template method lets subclasses redefine certain steps of an algorithm
without changing the algorithm’s
structure. When using design patterns, developers have to understand the intent of each design pattern
to determine whether the design pattern could provide a good solution to a given problem.
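As a purely illustrative sketch of this intent (hypothetical Java class names, not code from the ForUML paper or from any Fortran package discussed here), the skeleton of the algorithm lives in the base class while subclasses fill in the individual steps:

abstract class DataExporter {
    // The template method defines the fixed skeleton of the algorithm...
    public final void export() {
        open();
        writeRecords();
        close();
    }
    // ...and defers the individual steps to subclasses.
    protected abstract void open();
    protected abstract void writeRecords();
    protected abstract void close();
}

class CsvExporter extends DataExporter {
    @Override protected void open()         { System.out.println("open CSV file"); }
    @Override protected void writeRecords() { System.out.println("write comma-separated rows"); }
    @Override protected void close()        { System.out.println("close CSV file"); }
}

Calling new CsvExporter().export() runs the same three steps in the same order, but with CSV-specific behavior; another subclass could redefine the steps without changing the structure of the algorithm.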
Several researchers have proposed design patterns for computational software implemented with
Fortran. For example, Weidmann [41] implemented design patterns to enable sparse matrix computations
on NVIDIA GPUs. They then evaluated the benefits of the implementation and reported that the design
patterns provided a high level of maintainability and performance. Rouson et al. [12] proposed three new
design patterns, called multiphysics design patterns, to implement the differential equations, which are
integrated into multiphysics and numerical software. These new design patterns include the semidiscrete,
surrogate and template class patterns. Markus demonstrated how some well-known design patterns
could be implemented in Fortran 90, 95, and 2003 [42, 43]. Similarly, Decyk et al. [4] proposed the factory
pattern in Fortran 95 based on CSE software. These researchers presented the proposed pattern
implementation in their particle-in-cell (PIC) methods [44] in plasma simulation software. Decyk and
Gardner [45] also described a way to implement the strategy, template, abstract factory, and facade
patterns in Fortran 90/95.
3. ForUML
This section describes the rationale and benefits of developing ForUML and details the transformation
process used by ForUML.
The CSE characteristics described in Section 2.1 indicate that CSE developers could benefit from a tool
that creates system documentation with little effort. The SE community typically uses reverse engineering
to address this problem.
Although there are a number of reverse engineering tools [46] (see Section 2.2), those tools that can be
applied to OO Fortran do not provide the full set of documentation required by developers. Therefore, we
identified the need for a tool that automatically reverse engineers OO Fortran code into the necessary
UML design documentation.
This work is primarily targeted at CSE developers who develop OO Fortran. The ForUML tool will provide
the following benefits to the CSE community.(1)The extracted UML class diagrams should support
software maintenance and evolution and help maintainers ensure that the original design intentions are
satisfied.(2)The developers can use the UML diagrams to illustrate software design concepts to their
team members. In addition, UML diagrams can help developers visually examine relationships among
objects to identify code smells [47] in software being developed.(3)Because SE tools generally improve
productivity, ForUML can reduce the training time and learning curve required for applying SE practices
in CSE software development. For instance, ForUML will help developers perform refactoring activities by
allowing them to evaluate the results of refactoring using the UML diagrams rather than inspecting the
code manually.
Since Fortran 2003 provides all of the concepts of OOP, tools like ForUML can help to place Fortran and other OOP programming languages on an equal footing.
3.2. Transformation Process
The primary goal of ForUML is to reverse engineer UML class diagrams from Fortran code. By parsing a set of source files, it builds a collection of objects associated with syntactic entities and
relations. Object-based features were first introduced in the Fortran 90 language standard. Accordingly,
ForUML supports all versions of Fortran 90 and later, which encompasses most platforms and compiler
vendors. We implemented ForUML using Java Platform SE 6 so that it could run on any client computing system.
The UML object diagram in Figure 1 expresses the model of the Fortran language. The module object
corresponds to Fortran modules, that is, containers holding type and procedure objects. The type-bound
procedure and component objects are modeled with a composition association to instances of type. Both
the procedure and type-bound procedure objects are composed of argument and statement objects. The
generalization relation with base type object leads to the parents in the inheritance hierarchy. When
generating the class diagram in ForUML, we consider only the objects inside the dashed-line box that
separates object-oriented entities from the module-related entities.
Figure 2 provides an overview of the transformation process embodied in ForUML, comprising the
following steps: parsing, extraction, generating, and importing. The following subsections discuss each
step in more detail.
Result:
Thus, reverse engineering object-oriented code into appropriate class and object diagrams is completed successfully and output is verified.
Experiment no 18 Unit Testing with
Date JUnit
Aim:
To perform unit testing of Java code using the JUnit framework. JUnit is an open-source testing framework for Java programmers. The Java programmer can create test cases and test his/her own code.
To perform unit testing, we need to create test cases. The unit test case is a code which ensures
that the program logic works as expected.
The org.junit package contains many interfaces and classes for junit testing such as Assert, Test,
Before, After etc.
1) Manual Testing
If you execute the test cases manually without any tool support, it is known as manual testing. It is time consuming and less reliable.
2) Automated Testing
If you execute the test cases by tool support, it is known as automated testing. It is fast and more
reliable.
The Junit 4.x framework is annotation based, so let's see the annotations that can be used while
writing the test cases.
@Test annotation marks a method as a test method. @Test(timeout=1000) additionally specifies that the method will fail if it takes longer than 1000 milliseconds (1 second).
@BeforeClass annotation specifies that the method will be invoked only once, before starting all the tests.
@Before annotation specifies that the method will be invoked before each test.
@After annotation specifies that the method will be invoked after each test.
@AfterClass annotation specifies that the method will be invoked only once, after finishing all the tests.
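As a small illustration of the timeout attribute (a minimal sketch; the Thread.sleep call below is only a stand-in for the code under test):

import org.junit.Test;

public class TimeoutExample {
    @Test(timeout = 1000) // fails automatically if the method runs longer than 1000 ms
    public void completesWithinOneSecond() throws InterruptedException {
        Thread.sleep(200); // hypothetical work that finishes well inside the limit
    }
}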
Assert class
The org.junit.Assert class provides methods to assert the program logic.
Methods of Assert class
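Commonly used methods of the Assert class include assertEquals(), assertTrue(), assertFalse(), assertNull(), assertNotNull(), and assertArrayEquals(). A small illustrative sketch (the values are arbitrary examples):

import static org.junit.Assert.*;
import org.junit.Test;

public class AssertExamples {
    @Test
    public void demonstrateAssertMethods() {
        assertEquals(10, 5 + 5);                              // expected value vs. actual value
        assertTrue("hello".startsWith("he"));                 // condition must be true
        assertFalse("hello".isEmpty());                       // condition must be false
        assertNull(null);                                     // reference must be null
        assertNotNull("some value");                          // reference must not be null
        assertArrayEquals(new int[]{1, 2}, new int[]{1, 2});  // arrays must match element by element
    }
}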
Let's write the logic to find the maximum number for an array.
package com.javatpoint.logic;

public class Calculation {

    public static int findMax(int arr[]) {
        int max = 0;
        for (int i = 1; i < arr.length; i++) {
            if (max < arr[i])
                max = arr[i];
        }
        return max;
    }
}
Here, we are using JUnit 4, so there is no need to inherit TestCase class. The main testing code is
written in the testFindMax() method. But we can also perform some task before and after each
test, as you can see in the given program.
package com.javatpoint.testcase;

import static org.junit.Assert.*;
import com.javatpoint.logic.*;
import org.junit.Test;

public class TestLogic {

    @Test
    public void testFindMax() {
        assertEquals(4, Calculation.findMax(new int[]{1, 3, 4, 2}));
        assertEquals(-1, Calculation.findMax(new int[]{-12, -1, -3, -4, -2}));
    }
}
To run this example, right click on the TestLogic class -> Run As -> JUnit Test.
Output: Assertion Error
As you can see, program logic to find the maximum number for the given array is not correct
because it doesn't return -1 in case of negative values. The correct program logic is given below:
package com.javatpoint.logic;

public class Calculation {

    public static int findMax(int arr[]) {
        int max = arr[0]; // arr[0] instead of 0
        for (int i = 1; i < arr.length; i++) {
            if (max < arr[i])
                max = arr[i];
        }
        return max;
    }
}
If you run the JUnit test again, both assertions pass and no errors are reported.
Another example of Junit framework
Write the program code
package com.javatpoint.logic;

import java.util.StringTokenizer;

public class Calculation {

    // method that returns the maximum number
    public static int findMax(int arr[]) {
        int max = arr[0];
        for (int i = 1; i < arr.length; i++) {
            if (max < arr[i])
                max = arr[i];
        }
        return max;
    }

    // method that returns the cube of the given number
    public static int cube(int n) {
        return n * n * n;
    }

    // method that reverses each word of the given string
    public static String reverseWord(String str) {
        StringBuilder result = new StringBuilder();
        StringTokenizer tokenizer = new StringTokenizer(str, " ");

        while (tokenizer.hasMoreTokens()) {
            StringBuilder sb = new StringBuilder();
            sb.append(tokenizer.nextToken());
            sb.reverse();

            result.append(sb);
            result.append(" ");
        }
        // trim() removes the trailing space so the result matches the expected string
        return result.toString().trim();
    }
}
Write the test case
package com.javatpoint.testcase;

import static org.junit.Assert.assertEquals;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import com.javatpoint.logic.Calculation;

public class TestCase2 {

    @BeforeClass
    public static void setUpBeforeClass() throws Exception {
        System.out.println("before class");
    }

    @Before
    public void setUp() throws Exception {
        System.out.println("before");
    }

    @Test
    public void testFindMax() {
        System.out.println("test case find max");
        assertEquals(4, Calculation.findMax(new int[]{1, 3, 4, 2}));
        assertEquals(-2, Calculation.findMax(new int[]{-12, -3, -4, -2}));
    }

    @Test
    public void testCube() {
        System.out.println("test case cube");
        assertEquals(27, Calculation.cube(3));
    }

    @Test
    public void testReverseWord() {
        System.out.println("test case reverse word");
        assertEquals("ym eman si nahk", Calculation.reverseWord("my name is khan"));
    }

    @After
    public void tearDown() throws Exception {
        System.out.println("after");
    }

    @AfterClass
    public static void tearDownAfterClass() throws Exception {
        System.out.println("after class");
    }
}
Output:
before class
before
test case find max
after
before
test case cube
after
before
test case reverse word
after
after class
Result:
Thus, unit testing using the JUnit framework is completed successfully and output is verified.
Aim:
To test the percentage of code to be tested by unit tests using any code coverage tool.
Code coverage is performed to verify the extent to which the code has been executed.
Code coverage tools use static instrumentation in which statements monitoring code
execution are inserted at necessary junctures in the code.
Now, adding instrumentation code does result in increased execution time and code
length. But the increase is more than justified in light of the information that the tester
gets because of the extra code.
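A purely conceptual sketch of what instrumentation adds is shown below; real tools such as JaCoCo (Java) or istanbul (JavaScript) insert these probes automatically at the bytecode or source level, so the counters and names here are hypothetical:

public class InstrumentedExample {
    // Hypothetical hit counters, one per probed location in classify().
    static final int[] hits = new int[3];

    static String classify(int x) {
        hits[0]++;                  // probe: method entered
        if (x % 2 == 0) {
            hits[1]++;              // probe: "even" branch executed
            return "even";
        }
        hits[2]++;                  // probe: "odd" branch executed
        return "odd";
    }

    public static void main(String[] args) {
        classify(10);               // only the "even" branch runs, so branch coverage is 50%
        System.out.println("probe hits: " + java.util.Arrays.toString(hits));
    }
}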
Code coverage scripts generate a report that details how much of the application code
has been executed. This is a white-box testing technique.
Code coverage is primarily performed at the unit testing level. Unit tests are created by developers, thus giving them the best vantage point from which to decide what tests to include in unit testing.
As development progresses, new features and fixes are added to the codebase.
Obviously, the test code must be changed to stay updated with these changes. Testing
standards established at the beginning of the project must also be maintained
throughout subsequent release cycles. Code coverage can ensure that these standards
are maintained so that only the optimal quality code is pushed to production.
• Function Coverage: This ensures that all necessary functions are tested. It
also includes testing functions with different input parameters to test the logic in the
functions.
• Loop Coverage: This ensures that every loop in the source code is executed
at least once. Certain loops may be executed based on results achieved at runtime. One
must be careful to test such loops so as to fortify the code completely.
• Finite State Machine Coverage: This works based on the frequency of visits
from static states and other transactions. Finite state machine coverage is the most
complicated form of code coverage as it functions on the design of the software
structure.
Advantages of Code Coverage
• Allows introduction of test cases: If already available test cases do not test
the software extensively enough, one can introduce their own test cases to establish
robust coverage.
• Easy elimination of dead code and errors: Let’s say some parts of the
entire codebase were not touched during code coverage, or there are sections of dead
or useless code. Code coverage allows easy removal of such code, thus improving the
efficiency of the entire code base.
What is Test Coverage?
Unlike code coverage, test coverage is a black-box testing technique. It monitors the
number of tests that have been executed. Test cases are written to ensure maximum
coverage of requirements outlined in multiple documents – FRS (Functional
Requirements Specification), SRS (Software Requirements Specification), URS (User
Requirement Specification), etc.
The test coverage report provides information about parts of the software where test
coverage is being implemented. Essentially, it provides information about the tests
executed on an application or website.
Test coverage can also be evaluated through different types of testing. However, the
type of tests that must be run depends on the business priorities of the testing team
and the organization behind them. For example, user-centric web apps prioritize UI/UX
tests over functional tests. Conversely, financial apps will prioritize usability and security
testing over all other tests.
• Unit Testing: Performed at a unit level or module level. Bugs at this level
are widely different from issues encountered at the integration stage.
• Integration Testing: Sometimes performed together with system testing, since testing occurs at the system level. These tests are performed once all software modules are integrated.
The purpose of test coverage varies depending on the level at which tests are
performed. It also depends on the type of software being tested. Additionally, test
coverage metrics for mobile phones would differ from those for website testing.
• It helps to detect the areas of test cases that are useless for the current
project. These cases are reported and can be eliminated to make the code lighter.
• It helps developers create additional test cases as required. These
additional test cases help ensure that test coverage is maximum.
Code coverage tools will use one or more criteria to determine how your code was
exercised or not during the execution of your test suite. The common metrics that you
might see mentioned in your coverage reports include:
• Function coverage: how many of the functions defined have been called.
• Branch coverage: how many of the branches of the control structures (if statements, for instance) have been executed.
• Line coverage: how many lines of source code have been tested.
These metrics are usually represented as the number of items actually tested, the number of items found in your code, and a coverage percentage (items tested / items found). For example, if the test suite calls 8 of the 10 functions defined in a module, function coverage is 8 / 10 = 80%.
These metrics are related, but distinct. In the trivial script below, we have a Javascript
function checking whether or not an argument is a multiple of 10. We'll use that
function later to check whether or not 100 is a multiple of 10. It will help us understand the difference between function coverage and branch coverage (see coverage-tutorial.js below).
We can see that while our Function Coverage is 100%, our Branch Coverage is only 50%.
We can also see that the istanbul code coverage tool isn't calculating a Condition
Coverage metric.
This is because when we run our script, the else statement has not been executed. If we
wanted to get 100% coverage, we could simply add another line, essentially another
test, to make sure that all branches of the if statement are used.
coverage-tutorial.js
function isMultipleOf10(x) {
  if (x % 10 == 0) return true;
  else return false;
}
console.log(isMultipleOf10(100));
console.log(isMultipleOf10(34)); // This will make our code execute the else branch
A second run of our coverage tool will now show that 100% of the source is covered
thanks to our two console.log() statements at the bottom.
In this example, we were just logging results in the terminal but the same principle
applies when you run your test suite. Your code coverage tool will monitor the
execution of your test suite and tell you how much of the statements, branches,
functions and lines were run as part of your tests.
Find the right tool for your project
You might find several options to create coverage reports depending on the language(s)
you use. Some of the popular tools are listed below:
• Javascript: istanbul
• PHP: PHPUnit
• Python: Coverage.py
• Ruby: SimpleCov
Some tools like istanbul will output the results straight into your terminal while others
can generate a full HTML report that lets you explore which parts of the code are lacking
coverage.
There's no silver bullet in code coverage, and a high percentage of coverage could still
be problematic if critical parts of the application are not being tested, or if the existing
tests are not robust enough to properly capture failures upfront. With that being said it
is generally accepted that 80% coverage is a good goal to aim for. Trying to reach a
higher coverage might turn out to be costly, while not necessarily producing enough
benefit.
The first time you run your coverage tool you might find that you have a fairly low
percentage of coverage. If you're just getting started with testing it's a normal situation
to be in and you shouldn't feel the pressure to reach 80% coverage right away. The
reason is that rushing into a coverage goal might push your team to write tests that are
hitting every line of the code instead of writing tests that are based on the business
requirements of your application.
For instance, in the example above we reached 100% coverage by testing if 100 and 34
were multiples of 10. But what if we called our function with a letter instead of a
number?
Our tests would still report 100% coverage even though that behavior has never been checked, which shows that a high coverage number alone does not guarantee meaningful tests. That said, a good way to quickly increase your code coverage is to start by adding unit tests as, by definition, they should help you make sure that your test suite is reaching all lines of code.
Soon you'll have so many tests in your code that it will be impossible for you to know
what part of the application is checked during the execution of your test suite. You'll
know what breaks when you get a red build, but it'll be hard for you to understand what
components have passed the tests.
This is where the coverage reports can provide actionable guidance for your team. Most
tools will allow you to dig into the coverage reports to see the actual items that weren't
covered by tests and then use that to identify critical parts of your application that still
need to be tested.
Result:
Thus, testing the percentage of code to be tested by unit tests using a code coverage tool is completed successfully and output is verified.
Aim:
To define appropriate metrics for at least 3 quality attributes for any software application of your interest.
What Software Quality (Really) Is and the Metrics You Can Use to Measure It?
With such fast release times and more frequent releases, it’s easy to see how software
quality could suffer – the pressure to release faster could, in theory, increase the
chances of defects polluting production. However, this is not the case – the high-
performing IT companies that most often release software tend to experience 60 times
fewer failures.
You’ve probably already taken steps to speed up the release of software with DevOps or
Agile methodologies, but you must also understand what exactly software quality
entails if you want to release high-quality software regularly. It’s no good just speeding
up development – quality must be at the forefront of your objectives.
When you finish reading this post, you will have a more complete understanding of
software quality, the main factors that contribute to quality, and how to accurately
measure the quality of all software your company in particular develops with the help of
test metrics (see SeaLights’ test metrics learning section for a wider list of
recommended metrics).
Functional requirements specify what the software should do. Functional requirements
could be calculations, technical details, data manipulation and processing, or any other
specific function that defines what an application is meant to accomplish.
Non-functional requirements specify how the system should work. Also known as
“quality attributes,” non-functional requirements include things like disaster recovery,
portability, privacy, security, supportability, and usability.
Several factors contribute to software quality. We’ll look at the important aspects of
software quality and some practical ways of measuring them so that you can ensure
every piece of code you deploy into production satisfies its requirements.
Note that most factors indicating software quality fit into the non-functional
requirements category. And, while it’s obviously important that software does what it’s
built to do, this is the bare minimum you would expect from any application. Let’s see
what it takes to aim higher.
The CISQ software quality model provides a good base for understanding software
quality. You can combine the quality aspects outlined in this model with other relevant
factors to get a holistic view of software quality.
The CISQ software quality model defines four important indicators of software quality:
• Reliability
• Performance efficiency
• Security
• Maintainability
Reliability is the risk of software failure and the stability of a program when exposed to
unexpected conditions. Reliable software has minimal downtime, good data integrity,
and no errors that directly affect users.
Performance efficiency refers to an application’s use of resources and how that affects
its scalability, customer satisfaction, and response times. Software architecture, source
code design, and individual architectural components all contribute to performance
efficiency.
Security assesses how well an application protects information against the risk of
software breaches. The quantity and severity of vulnerabilities found in a software
system are indicators of its security level. Poor coding and architectural weaknesses
often lead to software vulnerabilities.
Maintainability is the ease with which you can modify software, adapt it for other
purposes, or transfer it from one development team to another. Compliance with
software architectural rules and use of consistent coding across the application combine
to make software maintainable.
The CISQ model provides a good platform for understanding software quality, but you
can consider other aspects alongside CISQ to get a more holistic view of quality.
Rate of Delivery
Rate of delivery means how often new versions of software are shipped to customers.
Since a new software version typically comes with improvements that directly impact
users, you can infer that higher rates of delivery correspond to better quality software
for customers.
Testability
Quality software requires a high degree of testability. Finding faults in software with
high testability is easier, making such systems less likely to contain errors when shipped
to end users.
The harder it is to provide quality assurance, the tougher time you’ll have ensuring that
quality applications are deployed into production.
Usability
The user interface is the only part of the software visible to users, so it’s vital to have a
good UI. Simplicity and task execution speed are two factors that lead to a better UI.
Returning briefly to the functional and non-functional requirements that affect software
quality, usability is a non-functional requirement. Consider an airline booking system
that allows you to book flights (functional requirement). If that system is slow and
frustrating to use (non-functional requirement), then the software quality is low.
How to Measure Software Quality
Below are some examples of test metrics and methods for measuring the important
aspects of software quality. Efficient measuring and testing of your software for quality
is the only way to maximize the chances of releasing high-quality software in today’s
fast-paced development environments.
You can measure reliability by counting the number of high priority bugs found in
production. You can also use load testing, which assesses how well the software
functions under
ordinary conditions of use. It’s important to note that “ordinary conditions of use” can
vary between low loads and high loads—the point is that such environments are
expected.
Load testing is also useful for measuring performance efficiency. Stress testing is an
important variation on load testing used to determine the maximum operating capacity
of an application.
Stress testing is conducted by inundating software with requests far exceeding its
normal and expected patterns of use to determine how far a system can be pushed
before it breaks. With stress testing, you get insight into the recoverability of the
software when it breaks—ideally, a system that fails should have a smooth recovery.
You can measure security by assessing how long it takes to patch or fix software
vulnerabilities. You can also check actual security incidents from previous software
versions, including whether the system was breached and if any breaches caused
downtime for users. All previous security issues should, of course, be addressed in
future releases.
Counting the number of lines of code is a simple measure of maintainability—software
with more lines of code is harder to maintain, meaning changes are more likely to lead
to errors.
There are several detailed test metrics used to check the complexity of code, such as
cyclomatic complexity, which counts the number of linearly independent paths through
a program’s source code.
The advice issued by NIST for cyclomatic complexity is that a value above 10 for a single module or function signifies potentially risky code in terms of possible defects. Software testing tools such as
Visual Studio can measure the cyclomatic complexity test metric for you.
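As a rough worked example (a hypothetical method, counted by hand): cyclomatic complexity can be estimated as the number of decision points (if, loop, case, &&, ||) plus one.

public class ComplexityExample {
    // Decision points: the for loop, the first if, the || inside it, and the
    // second if = 4, so the cyclomatic complexity of grade() is 4 + 1 = 5.
    static String grade(int[] marks) {
        int total = 0;
        for (int m : marks) {                 // decision 1
            if (m < 0 || m > 100) {           // decisions 2 and 3
                throw new IllegalArgumentException("invalid mark: " + m);
            }
            total += m;
        }
        if (total >= 50) {                    // decision 4
            return "pass";
        }
        return "fail";
    }

    public static void main(String[] args) {
        System.out.println(grade(new int[]{20, 35})); // prints "pass"
    }
}

A value of 5 is comfortably below the threshold of 10 mentioned above.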
You can check the rate of delivery by counting the number of software releases. Another
measure is the number of “stories” or user requirements shipped to the user.
You can test the GUI to make sure it’s simple and not frustrating for end users. The
problem is that GUI testing is complex and time-consuming – there are many possible
GUI operations and sequences that require testing in most software. And that means it
takes a long time to design test cases.
The complexity of GUI testing competes with the objective of releasing software quickly,
which has necessitated the implementation of automated testing. Several test suites
that completely simulate user behavior are available. Consider Abbot, eggPlant, and
Selenium.
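As one hedged illustration of automated GUI testing with Selenium WebDriver in Java (the URL, element locators, and expected page title below are hypothetical placeholders, not a real application):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class LoginPageTest {
    @Test
    public void loginShouldOpenDashboard() {
        WebDriver driver = new ChromeDriver();       // requires the chromedriver binary on the PATH
        try {
            driver.get("https://example.com/login"); // hypothetical URL
            driver.findElement(By.name("username")).sendKeys("demo");
            driver.findElement(By.name("password")).sendKeys("demo123");
            driver.findElement(By.id("loginButton")).click();     // hypothetical locator
            assertTrue(driver.getTitle().contains("Dashboard"));  // hypothetical expected title
        } finally {
            driver.quit();                           // always close the browser session
        }
    }
}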
The goal of tracking and analyzing software metrics is to determine the quality of the
current product or process, improve that quality and predict the quality once the
software development project is complete. On a more granular level, software
development managers are trying to:
• Manage workloads
• Reduce overtime
• Reduce costs
These goals can be achieved by providing information and clarity throughout the
organization about complex software development projects. Metrics are an important
component of quality assurance, management, debugging, performance, and estimating
costs, and they’re valuable for both developers and development team leaders:
• Managers can use software metrics to identify, prioritize, track and communicate
any issues to foster better team productivity. This enables effective management and
allows assessment and prioritization of problems within software development projects.
The sooner managers can detect software problems, the easier and less expensive the troubleshooting process is.
Software metrics offer an assessment of the impact of decisions made during software
development projects. This helps managers assess and prioritize objectives and
performance goals.
Result:
Thus, defining appropriate metrics for quality attributes for a software application of your interest is completed successfully and output is verified.