
MMC204 - SOFTWARE ENGINEERING

MODULE-5- PROJECT MANAGEMENT

Software Project Management


Definition
Software Project Management (SPM) is the process of planning, organizing, leading,
and controlling resources, procedures, and protocols to successfully complete software
projects. It ensures that the software is delivered on time, within budget, and with the
required functionality and quality.

Key Objectives of Software Project Management

1. Deliver software that meets user requirements.

2. Optimize use of time, resources, and effort.

3. Identify and manage project risks effectively.

4. Ensure the project remains within scope, schedule, and budget.

5. Maintain product quality and customer satisfaction.

Responsibilities of a Software Project Manager

A Project Manager is responsible for:

 Defining project goals and success criteria.

 Estimating effort, cost, and time requirements.

 Creating detailed project plans using planning tools.

 Assigning resources like team members, tools, and infrastructure.

 Managing risks and resolving conflicts.

 Communicating with stakeholders and clients.

 Monitoring progress and ensuring quality at each stage.

Project Planning Activities

1. Work Breakdown Structure (WBS):


WBS is the process of breaking down the entire project into smaller, manageable tasks
or components. This helps in clear understanding of work allocation and simplifies
estimation and scheduling.
Example: In an Online Bookstore project, a high-level task like "User Management"
can be broken into "Login/Signup", "Profile Editing", and "Password Recovery".
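Because a WBS is just a hierarchy, it can be captured directly as a nested structure. The following is a small sketch (an illustration only, not part of the notes; the extra branches are hypothetical) that represents part of the Online Bookstore WBS as a Python dictionary and lists the leaf-level work packages that would be estimated and scheduled.

```python
# Representing part of an Online Bookstore WBS as a nested dictionary (illustrative sketch).
wbs = {
    "Online Bookstore": {
        "User Management": ["Login/Signup", "Profile Editing", "Password Recovery"],
        "Catalogue": ["Search Books", "Book Details Page"],        # hypothetical branch
        "Orders": ["Shopping Cart", "Checkout", "Order History"],  # hypothetical branch
    }
}

def leaf_tasks(node):
    """Yield the lowest-level work packages, which are what get estimated and scheduled."""
    if isinstance(node, dict):
        for child in node.values():
            yield from leaf_tasks(child)
    else:                      # a list of leaf task names
        yield from node

for task in leaf_tasks(wbs):
    print(task)
```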

2. Effort Estimation:
Effort estimation helps predict the amount of effort, time, and resources needed to
complete the project. Common estimation methods include:

 Lines of Code (LOC)

 Function Point Analysis (FPA)

 COCOMO Model (explained in detail below)

COCOMO Model (Constructive Cost Model)

COCOMO, developed by Barry Boehm, is a mathematical model used to estimate the effort (in person-months), development time (in months), and cost of software projects based on the size of the project.

Types of COCOMO:

1. Basic COCOMO – Provides rough estimates based on the project size in KLOC
(thousands of lines of code).

2. Intermediate COCOMO – Adds various cost drivers like team experience, tool
support, etc.

3. Detailed COCOMO – Considers phase-wise breakdown and all cost drivers for
greater accuracy.

Basic COCOMO Equations:

Effort (E) = a × (KLOC)^b


Development Time (TDEV) = c × (Effort)^d

The constants a, b, c, and d vary depending on the project type:

 Organic (simple projects): a = 2.4, b = 1.05, c = 2.5, d = 0.38

 Semi-detached (intermediate projects): a = 3.0, b = 1.12, c = 2.5, d = 0.35

 Embedded (complex projects): a = 3.6, b = 1.20, c = 2.5, d = 0.32

Example:
Estimate the effort and time required to build a simple (organic) software project of
32,000 lines of code (32 KLOC).

 Effort (E) = 2.4 × (32)^1.05 ≈ 91.3 person-months

 Development Time (TDEV) = 2.5 × (91.3)^0.38 ≈ 13.9 months

So, a 32 KLOC organic project would take around 91 person-months of effort and roughly 14 months
of calendar time.
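The calculation above can be scripted. The following is a minimal sketch (not part of the original notes) that evaluates the Basic COCOMO equations for the three project types, using the standard constants listed earlier.

```python
# Basic COCOMO effort and schedule estimation (illustrative sketch).
# Constants (a, b, c, d) are the standard Basic COCOMO values listed above.

COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, project_type: str = "organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFICIENTS[project_type]
    effort = a * (kloc ** b)          # E = a * (KLOC)^b
    tdev = c * (effort ** d)          # TDEV = c * (Effort)^d
    return effort, tdev

if __name__ == "__main__":
    effort, tdev = basic_cocomo(32, "organic")
    print(f"Effort: {effort:.1f} person-months")   # ~91.3
    print(f"TDEV:   {tdev:.1f} months")            # ~13.9
```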

3. Schedule Creation

Schedule creation involves determining timelines, setting milestones, and identifying dependencies between tasks. Tools like Gantt charts help visualize task durations, while techniques like PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method) are used to identify the minimum project duration and critical paths.

4. Resource Allocation
This involves assigning appropriate team members, tools, and technologies to tasks.
Effective resource allocation ensures that team members are not overburdened or
underutilized. It also includes budgeting for software tools, testing environments, and
hardware infrastructure.

5. Progress Tracking

Progress tracking helps ensure that the project stays on schedule. This involves:

 Setting milestones and deadlines

 Reviewing work completed versus planned

 Using status reports and performance metrics

 Conducting regular team meetings to track progress and resolve issues

Common tools for tracking include MS Project, JIRA, Trello, and other project
management platforms.

Software Configuration Management (SCM)

SCM is a discipline within software engineering that systematically manages and controls changes to software systems throughout their development and maintenance. It ensures the integrity, consistency, and traceability of software artifacts across different phases of the project.

Why SCM is Important

SCM prevents errors such as overwriting work during collaboration, ensures consistent versions, maintains historical records of changes, and allows teams to roll back to previous versions when needed. It supports team collaboration, reduces rework, and ensures accountability.

Core SCM Activities

1. Configuration Identification
This activity involves identifying and labelling all configuration items in a software
project such as source code, requirement documents, design specifications, and test
cases. Each item is assigned a unique identifier, enabling traceability and organized
versioning.

2. Version Control
Version control is the process of managing changes to files over time. It keeps track of
every modification, who made it, and when it was made. Tools such as Git and SVN
help manage multiple versions, support collaboration, and allow merging, branching,
and reverting changes.

3. Change Control
This activity ensures that all changes to software are introduced in a controlled and
coordinated manner. It involves submitting change requests, evaluating their impact,
approving or rejecting them, and documenting the reasons and decisions.

4. Configuration Audits and Status Accounting


Configuration audits are performed to ensure that configuration items are complete,
correct, and consistent. Status accounting involves keeping records of all
configuration items, their versions, and the changes made over time.

Tools for SCM

Popular Tools

 Git

 CVS

 SVN

Enterprise Tools

 Rational ClearCase

 Surround SCM

 Seapine

 Vesta

These tools support automated versioning, branching, merging, and auditing, which
are essential for managing software in collaborative and enterprise environments.

Advantages of Software Configuration Management (SCM)

1. Version Control:
Maintains multiple versions of software components, making it easy to retrieve
or roll back changes.

2. Improved Collaboration:
Enables multiple developers to work simultaneously without conflict.

3. Change Tracking and Auditing:


All changes are recorded, ensuring accountability and traceability.

4. Reduced Errors and Conflicts:


Prevents code overwrites and inconsistency during collaborative development.

5. Faster Issue Resolution:


Identifies when and where a defect was introduced using version history.

6. Reliable Releases:
Ensures that only verified and approved changes are released.

7. Enhanced Productivity:
Automation tools (like Git, SVN) streamline tracking, branching, and merging
tasks.

Disadvantages of Software Configuration Management (SCM)

1. Learning Curve:
New users may find tools like Git difficult to learn initially.

2. Tool Complexity:

Enterprise SCM tools may be complex and require dedicated training.

3. Time-Consuming Audits:
Performing configuration audits and reviews can be time-intensive.

4. Increased Overhead:
Strict control mechanisms may slow down rapid development if not balanced.

5. Dependency on Tools:
SCM becomes ineffective if the tools are not used consistently across the team.

Project Scheduling

Definition
Project scheduling is the process of identifying, organizing, and sequencing project
tasks along with assigning timelines, resources, and milestones to ensure smooth
execution and timely delivery of a software product.

Goals of Project Scheduling

 Define the complete set of project tasks.

 Estimate the duration of each task.

 Assign responsibilities for each task.

 Identify task dependencies (which task must come before or after another).

 Predict total project duration and estimate the completion date.

Importance of Scheduling

Proper scheduling ensures that the project progresses as planned, helps monitor team
workload, prevents delays, and avoids last-minute bottlenecks. Poor scheduling can
lead to resource conflicts, increased costs, missed deadlines, and lower quality output.

Scheduling Techniques

1. Work Breakdown Structure (WBS)


WBS is a hierarchical decomposition of the project into smaller, manageable
components or tasks. It simplifies planning, estimation, and responsibility assignment.
Each task in the WBS can be individually scheduled and tracked.

2. Gantt chart
A Gantt chart is a horizontal bar chart used in project scheduling to represent tasks
along a timeline. Each task is displayed as a bar, where the length of the bar represents
its duration and its position reflects its start and end dates. It visually shows task
dependencies, overlaps, milestones, and progress.

Gantt charts are especially useful for tracking the progress of tasks, showing which
ones are in progress, completed, or yet to start. They are commonly used in meetings
and status reports to provide stakeholders a clear overview of the project’s state.

Example - Library Management System (LMS) Project:


Tasks:

1. Requirement Gathering – 3 days

2. System Design – 4 days (after requirements)

3. Implementation – 5 days (after design)

4. Testing – 3 days (after implementation)

5. Deployment – 2 days (after testing)

Gantt chart:

A horizontal bar chart showing tasks across 17 days. Tasks are listed on the Y-axis,
and time (days) on the X-axis. Each task is represented by a horizontal bar that spans
from its start to end date. Progress bars are shaded to show completion. Arrows
indicate dependencies between tasks. Milestones like "Design Complete" and "Go
Live" are marked with vertical lines or markers.

Diagram Description:
 Each bar represents one task, showing start and end dates clearly.

 Requirement Gathering is fully completed and shown as a fully shaded bar.

 System Design starts after Requirement Gathering and is shown as 50%
complete (partially shaded).

 Implementation, Testing, and Deployment follow sequentially, with no
progress indicated yet.

 Arrows between tasks show dependencies (e.g., Design must finish before
Implementation starts).

 Vertical markers are used to indicate project milestones: "Design Complete" at
the end of System Design and "Project Go Live" at the end of Deployment.

 This visual arrangement helps identify the timeline, overlaps, and critical
sequencing of tasks.
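For readers who want to reproduce a chart like the one described, the following is a minimal sketch using matplotlib (an assumption; the notes do not prescribe a tool) with the LMS task durations given above.

```python
# Minimal Gantt-style chart for the LMS schedule using matplotlib's broken_barh.
# Task start days and durations follow the table above (Day 1 = x position 0).
import matplotlib.pyplot as plt

tasks = [                      # (name, start_day, duration_in_days)
    ("Requirement Gathering", 0, 3),
    ("System Design",         3, 4),
    ("Implementation",        7, 5),
    ("Testing",              12, 3),
    ("Deployment",           15, 2),
]

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (i - 0.3, 0.6))   # one bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.invert_yaxis()                      # first task at the top, like a Gantt chart
ax.set_xlabel("Project day")
ax.set_title("LMS Project - Gantt chart (sketch)")
plt.tight_layout()
plt.show()
```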

Gantt chart Overview

The Gantt chart visually represents five sequential tasks in the development of a
Library Management System (LMS), plotted over a time axis (in days). Each task is
represented as a horizontal bar, showing the start date, duration, and end date. The
chart includes progress shading, milestone indicators, and task dependencies.

Task Breakdown

1. Requirement Gathering

o Duration: 3 days (Day 1 to Day 3)

o Progress: Fully completed (100%)

o No dependencies

o Bar is fully shaded indicating task completion.

2. System Design

o Duration: 4 days (Day 4 to Day 7)

o Starts after Requirement Gathering

o Progress: 50% completed (half-shaded bar)

o Milestone: “Design Complete” marked at Day 7

3. Implementation

o Duration: 5 days (Day 8 to Day 12)

o Starts after Design

o Not started yet (empty/ unshaded bar)

o Dependency arrow links it from System Design

4. Testing

o Duration: 3 days (Day 13 to Day 15)

o Starts after Implementation

o No progress yet

o Dependency arrow from Implementation to Testing

5. Deployment

o Duration: 2 days (Day 16 to Day 17)

o Starts after Testing

o No progress yet
o Milestone: “Project Go Live” marked at Day 17

Timeline Axis

 The X-axis shows a continuous time scale from Day 1 to Day 17

 The Y-axis lists tasks from top to bottom in the order of execution

 Vertical lines indicate day breaks or important checkpoints

Task Dependencies

 Arrows clearly show which tasks follow others (e.g., Design depends on
Requirements, Implementation on Design, etc.)

 This helps visualize sequential flow and identify the critical path

Milestones

 Small diamond shapes or vertical markers represent:

o Design Complete (after Day 7)

o Project Go Live (end of Day 17)

3. PERT Chart

Explanation: PERT (Program Evaluation and Review Technique) is a statistical project management tool used when task durations are uncertain. It estimates expected time using three time values:

 Optimistic time (O): Minimum possible time if everything goes well

 Most likely time (M): Expected duration under normal conditions

 Pessimistic time (P): Maximum possible time under worst-case scenario

The Expected Time (TE) is calculated as: TE = (O + 4M + P) / 6. This formula gives more weight to the most likely time.

Example - Library Management System (Testing Task):

 Optimistic (O) = 2 days

 Most Likely (M) = 4 days

 Pessimistic (P) = 6 days

 Expected Time TE = (2 + 4×4 + 6) / 6 = 24 / 6 = 4 days
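A small helper (a sketch, not from the notes) makes it easy to repeat this calculation for every task; it also reports the standard deviation, (P − O) / 6, that PERT analyses commonly use alongside TE.

```python
# PERT expected time (TE) and standard deviation for a task (illustrative sketch).

def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Return (expected time, standard deviation) using the PERT formulas."""
    te = (optimistic + 4 * most_likely + pessimistic) / 6   # TE = (O + 4M + P) / 6
    sigma = (pessimistic - optimistic) / 6                  # common PERT spread measure
    return te, sigma

# Testing task from the LMS example: O = 2, M = 4, P = 6
te, sigma = pert_estimate(2, 4, 6)
print(f"Expected time: {te} days, std. deviation: {sigma:.2f} days")  # 4.0 days, 0.67 days
```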

A network diagram with nodes representing events and arrows for activities. Each
arrow includes task name and TE (expected time). Tasks flow sequentially from
"Start" to "End".

Diagram Description:
 Circular nodes represent milestones (e.g., completion of a task).

 Arrows show tasks between milestones and include labels (Task name and
duration).

 Tasks proceed from Start → A → B → C → D → E → End:

o A = Requirement Gathering (TE = 3 days)

o B = System Design (TE = 4 days)

o C = Implementation (TE = 5 days)

o D = Testing (TE = 3 days)

o E = Deployment (TE = 2 days)

 The critical path is the longest path in this network and determines the shortest
completion time for the project.

 If there are parallel paths, slack time (float) is identified for non-critical tasks.

Purpose:

 Helps manage uncertain timelines

 Identifies earliest and latest task start times

 Supports estimation of overall project duration

4. CPM Chart

The Critical Path Method (CPM) is a deterministic scheduling technique used when
task durations are known and fixed. It helps identify the critical path, which is the
longest sequence of dependent tasks that determines the shortest time in which a
project can be completed.

In a CPM chart, each activity has:

 Earliest Start (ES) and Earliest Finish (EF)

 Latest Start (LS) and Latest Finish (LF)

 Slack = LS − ES (or LF − EF): how much a task can be delayed without
affecting the project

Tasks on the critical path have zero slack — delaying any of them delays the project.

Example - Library Management System:

Task Description Duration Depends on

A Requirement Gathering 3 days —

B System Design 4 days A

C Implementation 5 days B

D Testing 3 days C

E Deployment 2 days D

Sequence: A → B → C → D → E

Critical Path Duration: 3 + 4 + 5 + 3 + 2 = 17 days

All tasks are on the critical path. Any delay in these will delay the entire project.
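For projects with parallel branches, the ES/EF/LS/LF values are what reveal the critical path. The following is a small sketch (an illustration, not from the notes) that runs the forward and backward passes over a task table like the one above and prints each task's slack.

```python
# Critical Path Method: forward/backward pass over a small task table (sketch).
# Assumes tasks are listed so that every predecessor appears before its successors.

tasks = {          # name: (duration_in_days, list_of_predecessors)
    "A": (3, []),        # Requirement Gathering
    "B": (4, ["A"]),     # System Design
    "C": (5, ["B"]),     # Implementation
    "D": (3, ["C"]),     # Testing
    "E": (2, ["D"]),     # Deployment
}

# Forward pass: earliest start (ES) and earliest finish (EF)
es, ef = {}, {}
for name, (dur, preds) in tasks.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_duration = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS)
lf, ls = {}, {}
for name in reversed(list(tasks)):
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_duration)
    ls[name] = lf[name] - tasks[name][0]

print(f"Project duration: {project_duration} days")          # 17 days
for name in tasks:
    slack = ls[name] - es[name]
    flag = "critical" if slack == 0 else f"slack = {slack}"
    print(f"Task {name}: ES={es[name]} EF={ef[name]} LS={ls[name]} LF={lf[name]} ({flag})")
```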

Generated CPM Chart:

A linear flowchart where nodes represent milestones and arrows show task names and durations. The entire path from A to E is marked as the critical path.

Diagram Description:

 Rectangular boxes represent each task, labeled with their name and duration.

 Arrows show the sequence from Task A to E.

 Each node can include ES, EF, LS, LF values (for advanced tracking).

 The critical path is highlighted to indicate zero slack.

 Since there are no parallel tasks here, the entire workflow is the critical path.

Purpose:

 Helps focus on tasks that directly affect project completion

 Optimizes resource allocation

 Useful for schedule tracking and delay management in fixed-duration projects

5. Timeline or Bar Chart

A Timeline Chart (or Bar Chart) is a simplified version of a Gantt chart that visually
represents tasks along a horizontal time axis. Unlike Gantt charts, it doesn't show
progress, dependencies, or milestones — but provides a quick overview of task
durations and schedule.

Example - Library Management System:

Day Task

Day 1–3 Requirement Gathering

Day 4–7 System Design

Day 8–12 Implementation

Day 13–15 Testing

Day 16–17 Deployment

Timeline/Bar Chart:

A simple horizontal chart showing each task as a colored bar aligned to the day it occurs. The X-axis represents time, and the Y-axis lists tasks.

Diagram Description:

 Y-axis lists tasks (A–E)

 X-axis shows project timeline (Day 1 to Day 17)

 Each task is represented as a single horizontal bar

 Bars are drawn proportionally to task duration

 No arrows, milestones, or overlaps are shown

 Provides a clean and high-level schedule view

Purpose:

 Ideal for small teams and academic project planning

 Gives a quick snapshot of when each task occurs

 Suitable for simple overviews or presentation visuals

Advantages of Project Scheduling

1. Clear Roadmap:
Defines what needs to be done, by whom, and when.

2. Resource Optimization:
Allocates resources efficiently, avoiding under- or over-utilization.

3. Improved Time Management:


Identifies critical paths and task dependencies, preventing delays.

4. Progress Monitoring:
Gantt and PERT/CPM charts help track progress and forecast completion.

5. Informed Decision-Making:
Enables better planning and risk assessment with real-time scheduling data.

6. Milestone Tracking:
Helps in evaluating project status against deadlines.

Disadvantages of Project Scheduling

1. Time-Consuming Planning:
Preparing detailed schedules like Gantt or PERT charts can be complex and
lengthy.

2. Inflexibility:
Rigid schedules may not easily adapt to changing project requirements.

3. Over-Reliance on Estimates:
Inaccurate time/cost estimates can make the schedule unrealistic.

4. Tool Dependency:
Requires specialized software (like MS Project) and trained users.

5. Not Suitable for Small Projects:
For very small or short-term projects, formal scheduling may be unnecessary
overhead.

What is DevOps?
DevOps is a set of practices, cultural philosophies, and tools that integrates
software development (Dev) and IT operations (Ops). It aims to shorten the
development lifecycle, improve software quality, and deliver applications and
services at high velocity.

Objectives of DevOps
 Deliver software faster and more reliably.

 Support continuous integration (CI) and continuous delivery (CD).

 Emphasize automation across the entire software lifecycle — from development to deployment and operations.

Motivation for DevOps

Traditional software practices faced challenges like:

 Slow and infrequent releases

 Miscommunication between development and operations teams

 High failure rates in deployments

DevOps emerged to:

 Reduce time to market

 Increase deployment frequency

 Improve collaboration between development and operations

 Ensure faster recovery from failures

Benefits of DevOps

 Faster development and deployment cycles

 Improved product quality and stability

 Automation reduces manual and repetitive tasks

 Quicker bug identification and resolution

 Higher customer satisfaction due to frequent updates

Core DevOps Practices

 Continuous Integration (CI): Regularly integrating code changes into a shared
repository and testing automatically.

 Continuous Delivery (CD): Automating the release process so code changes can be
deployed quickly.

 Infrastructure as Code (IaC): Provisioning infrastructure using code (e.g., scripts,
templates).

 Automated Testing & Monitoring: Testing and monitoring systems continuously to
detect issues early.

 Collaboration Tools: Using tools like Slack, Jira, GitHub for efficient team
collaboration.

Cloud Computing in DevOps

The cloud plays a significant role in modern DevOps practices by providing on-
demand infrastructure, scalability, and DevOps toolchains.

Key Contributions of Cloud in DevOps

 Easily scale up/down infrastructure resources

 Provides automated backup and recovery

 Enables CI/CD pipelines and DevOps tools to run efficiently

Cloud Service Models


Cloud services are generally classified into three major models:

1. IaaS (Infrastructure as a Service)


What It Is: IaaS provides virtualized computing resources like virtual machines,
storage, and networking via the internet. It is ideal for those who want control over
their environment without managing physical hardware.

Key Features:

 Users manage: OS, applications, data

 Provider manages: virtualization, hardware, network

Examples:

 AWS EC2 (Elastic Compute Cloud)

 Google Compute Engine

Use Case: Best suited for system admins and developers needing custom
environments or managing large-scale infrastructures.

AWS EC2 – In Practice: Amazon EC2 (Elastic Compute Cloud) is a widely used
IaaS offering that provides scalable virtual servers (called instances) in the AWS
Cloud.

Steps to Create an EC2 Instance:

1. Login to AWS Console → Navigate to EC2 Dashboard

2. Launch Instance → Choose an AMI (Amazon Machine Image) like Ubuntu or
Windows
3. Select Instance Type → Choose based on CPU/RAM requirements (e.g.,
t2.micro)

4. Configure Instance → Network, storage, IAM roles, etc.

5. Add Storage → Specify EBS volume (Elastic Block Store)

6. Configure Security Group → Open necessary ports (e.g., SSH for Linux)

7. Launch and Connect → Use SSH key or EC2 Connect to access instance
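The same launch can be scripted. Below is a hedged sketch using the boto3 SDK (not part of the original notes); it assumes AWS credentials and a default region are already configured, and the AMI ID and key-pair name shown are placeholders you would replace with real values.

```python
# Launching a single EC2 instance with boto3 (illustrative sketch).
# Assumes AWS credentials and a default region are configured (e.g., via `aws configure`).
import boto3

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID (e.g., an Ubuntu image)
    InstanceType="t2.micro",           # small, free-tier-eligible instance type
    KeyName="my-key-pair",             # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()          # block until the instance is up
instance.reload()                      # refresh attributes such as the public IP
print("Instance ID:", instance.id)
print("Public IP:  ", instance.public_ip_address)
```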

Benefits:

 On-demand pricing (pay-as-you-go)

 Scalability: Can scale up or down based on traffic

 Flexibility: Wide range of instance types

2. PaaS (Platform as a Service)


What It Is: PaaS provides a platform with tools for developing, testing, deploying,
and maintaining applications. Users focus on the application itself, not the
infrastructure.

Key Features:

 Users manage: application & data

 Provider manages: OS, servers, runtime, middleware

Examples:

 Google App Engine

 Microsoft Azure App Services

 AWS Elastic Beanstalk

AWS Elastic Beanstalk – In Practice: AWS Elastic Beanstalk is a fully managed PaaS service that helps developers deploy applications (like Java, .NET, Python, Node.js) quickly without worrying about infrastructure.

Key Features:
 Automatically handles provisioning, load balancing, scaling, and monitoring

 Supports various development stacks

 Easily integrates with Git, Docker, CI/CD tools

Steps to Deploy an App Using Elastic Beanstalk:

1. Package your application code

2. Login to AWS Console → Go to Elastic Beanstalk service

3. Create New Application → Enter name, platform (e.g., Python)

4. Upload Your Code → Via ZIP or Git repository

5. Deploy Environment → Beanstalk creates EC2 instances, load balancers, and
databases

6. Monitor & Scale → Use dashboard metrics and auto-scaling features

Use Case: Great for startups and developers who want to deploy code without
managing any backend.

3. SaaS (Software as a Service)

What It Is: SaaS delivers fully functional applications over the internet. Users
access them via a web browser or app without installing or maintaining anything.

Key Features:

 Everything managed by the provider

 Subscription or pay-as-you-go pricing

 Accessible from anywhere with internet

Examples:

 Gmail

 Google Drive

 Microsoft Office 365

Use Case: Best for end-users who need ready-to-use software for daily tasks like
communication, documentation, or CRM.

Example: Explain how various types of cloud-based solutions can be applied to design and deploy a real-time order tracking feature in a courier delivery application. Highlight the role of infrastructure management, application development platforms, and ready-to-use software tools in this context.

Cloud-based solutions are very useful for building a real-time order tracking system
in a courier delivery app, where users can see the live location of their parcels and
receive updates.

1. Infrastructure Management (IaaS)


Cloud providers like AWS, Google Cloud, and Azure offer servers and
databases that store tracking data and handle user requests. These servers
process GPS data and show it to users in real-time.
2. Scalable Backend
As more users track their orders, the system can automatically scale to handle
more data and users without slowing down.
3. Application Development Platforms (PaaS)
Platforms like Firebase, AWS Amplify, or Heroku help developers build apps
faster. They offer tools for handling live data, user authentication, and data
storage.
4. Real-Time Data Updates
Services like Firebase Real-time Database or AWS AppSync help update the
parcel’s location live on the user’s screen as the delivery person moves.
5. Authentication and Permissions
PaaS platforms provide built-in login systems to make sure only the correct user
can track their own parcel.
6. Ready-to-Use Software Tools (SaaS)
Services like Mapbox, Google Maps APIs, or HERE Location Services
provide real-time maps and live GPS tracking features. These tools are easy to
integrate into the app.
7. Push Notifications
Using tools like Firebase Cloud Messaging (FCM), users can get alerts when
their parcel is picked up, out for delivery, or delivered.
8. Security and Privacy
Cloud services ensure that user location and delivery data are protected using
encryption, secure APIs, and access controls.

9. Monitoring and Analytics
Cloud platforms help monitor delivery status, app usage, and errors. This helps
companies improve service quality.
10. Cost and Speed Benefits
Using cloud services reduces setup costs and development time, making it
affordable and quicker to launch the tracking system.

Example: Apply IaaS, PaaS, and SaaS cloud models to implement an online payment system in an e-commerce app. Explain how each model helps in DevOps activities.
Applying Cloud Models in DevOps for Online Payment Feature
We are adding a secure online payment system in an e-commerce app. DevOps
teams use cloud models to build, test, deploy, and manage this feature efficiently.

1. IaaS (Infrastructure as a Service)

Use Case: Host secure payment APIs

 Use Azure Virtual Machines or AWS EC2 to run custom-built payment
services.

 Deploy backend code for handling payment gateway communication (e.g.,
Razorpay, Stripe).

 DevOps automates deployment and scaling using tools like Terraform or
Ansible.

DevOps Benefits:

 Full control over server security and configurations

 Supports high customization (encryption, logging)

 Automate testing and deployment of backend code

2. PaaS (Platform as a Service)

Use Case: Quick deployment of transaction management service

 Use Google App Engine or AWS Elastic Beanstalk to deploy a microservice
that tracks order payments.

 The platform handles load balancing, health checks, and scaling automatically.

 Connects with databases (e.g., Firebase, DynamoDB) for payment history.

DevOps Benefits:

 Faster deployments with minimal infrastructure setup

 Easily integrates with DevOps CI/CD pipelines

 Automatically manages performance under heavy traffic

3. SaaS (Software as a Service)

Use Case: Use pre-built payment and DevOps tools

 Use Stripe or Razorpay APIs for secure and ready-to-use payment gateway
services.

 Use GitHub for version control and Jenkins for CI/CD automation.

 Monitor service with tools like Datadog or New Relic.

DevOps Benefits:

 No need to build payment system from scratch

 Easy integration with deployment and monitoring tools

 Saves development time and effort
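As a concrete illustration of the SaaS route, the sketch below (an assumption, not part of the notes) shows how a backend might create a payment with Stripe's Python SDK; the API key, amount, and order reference are placeholders.

```python
# Creating a payment with Stripe's Python SDK (illustrative sketch).
# The secret key below is a placeholder; real keys come from the Stripe dashboard.
import stripe

stripe.api_key = "sk_test_placeholder"

# Create a PaymentIntent for an order worth 499.00 INR (amount is in the
# smallest currency unit, i.e. paise).
intent = stripe.PaymentIntent.create(
    amount=49900,
    currency="inr",
    metadata={"order_id": "ORDER-1234"},   # hypothetical order reference
)

print("PaymentIntent ID:", intent.id)
print("Status:", intent.status)            # e.g. "requires_payment_method"
```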

Cloud Services in DevOps Environments

Modern DevOps practices heavily rely on cloud platforms for hosting, deploying,
monitoring, and scaling applications. These platforms offer powerful services that
support automation, elasticity, and integration throughout the DevOps lifecycle.

Major Cloud Providers and Services

1. Amazon Web Services (AWS)


Offers widely used DevOps tools such as:

o EC2 (Elastic Compute Cloud): for scalable virtual machines

o S3 (Simple Storage Service): for cloud-based storage

o CodePipeline: for building CI/CD pipelines

2. Microsoft Azure
Provides tools such as:

o Azure DevOps: for planning, coding, building, testing, and deploying

o Azure Kubernetes Service (AKS): for container orchestration

3. Google Cloud Platform (GCP)


Offers:

o Cloud Build: a serverless CI/CD platform

o GKE (Google Kubernetes Engine): for deploying and managing
containers at scale

Key Concepts in Cloud-based DevOps


Auto-Scaling

Auto-scaling refers to the cloud’s ability to automatically increase or decrease computing resources such as virtual machines depending on the workload.

Example:
During a sudden traffic spike on an e-commerce website (such as during a flash sale),
auto-scaling increases the number of servers to handle the load. When the traffic
reduces, it automatically reduces the number of servers to optimize cost.

Benefit to DevOps:

 Ensures application availability and performance during varying loads

 Reduces manual intervention in infrastructure scaling

 Supports automation in deployment environments

Elasticity

Elasticity is the ability of the cloud infrastructure to adjust its resources dynamically
based on workload fluctuations. It is closely related to auto-scaling but focuses more
on how efficiently the system can respond to changes.

Example:
An online food delivery app might experience high usage during lunch and dinner
hours. The cloud infrastructure can scale up during those times and scale down during
off-peak hours automatically.

Benefit to DevOps:
 Ensures applications remain responsive under different workloads

 Enables cost efficiency by using only the required resources

 Allows systems to adapt quickly without manual configuration

CI/CD Integration with Cloud


CI (Continuous Integration) and CD (Continuous Delivery/Deployment) involve the
automated process of building, testing, and deploying applications when new code
changes are pushed.

How Cloud Enables This:


Cloud platforms such as AWS, Azure, and GCP provide native CI/CD tools. These
tools integrate with code repositories like GitHub and GitLab to automate the
development pipeline.

Example Scenario:
A developer commits code to GitHub. The CI/CD tool (such as AWS CodePipeline) is
triggered automatically. It:

 Builds the new version of the application

 Runs automated tests to verify the code

 Deploys the application to cloud infrastructure like EC2 or Elastic Beanstalk

Benefit to DevOps:

 Enables faster and more reliable deployments

 Catches bugs early through automated testing

 Reduces manual effort in the release process

 Encourages continuous feedback and delivery

DevOps Operations

In the DevOps lifecycle, the Operations phase is crucial for ensuring that deployed
applications run smoothly, securely, and reliably. It includes monitoring systems,
enforcing security policies, managing incidents, and continuously gathering feedback
to improve the product.

1. Monitoring & Alerting

Purpose:
To provide real-time visibility into the performance and health of applications and
infrastructure.

Tools Used:
 Prometheus: A powerful open-source tool used to collect metrics from systems
and services.

 Grafana: A visualization tool often used with Prometheus to create dashboards.

 Nagios: A classic monitoring tool for server health and network monitoring.

Key Features:

 Continuously tracks metrics like CPU usage, memory, response time, etc.

 Sends alerts via email, Slack, or SMS when failures or performance anomalies
occur.

 Helps teams detect issues before they impact users.

Example:
If a server exceeds a CPU threshold (e.g., 90%), Prometheus triggers an alert. Grafana shows this on a dashboard. Engineers are notified instantly to take action.
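To make the monitoring flow concrete, here is a minimal sketch (an illustration, not part of the notes) that exposes a CPU-usage metric with the prometheus_client and psutil Python packages; a Prometheus server would scrape this endpoint and evaluate the alert rule.

```python
# Exposing a CPU-usage metric that Prometheus can scrape (illustrative sketch).
# Requires the `prometheus_client` and `psutil` packages.
import time

import psutil
from prometheus_client import Gauge, start_http_server

cpu_usage = Gauge("host_cpu_usage_percent", "Host CPU usage in percent")

if __name__ == "__main__":
    start_http_server(8000)   # metrics become available at http://localhost:8000/metrics
    while True:
        cpu_usage.set(psutil.cpu_percent(interval=None))  # update the gauge
        time.sleep(5)                                     # refresh every 5 seconds
```

An alerting rule in Prometheus (for example, one that fires when host_cpu_usage_percent stays above 90 for several minutes) would then notify the team through Alertmanager, email, Slack, or SMS.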

2. Security Policies & Automation (DevSecOps)

What is DevSecOps?
DevSecOps integrates security into every stage of the DevOps pipeline rather than
treating it as a separate phase.

Key Practices:

 Automated Security Scans: Tools automatically scan code and infrastructure
for vulnerabilities (e.g., dependency scanners, SAST tools).

 Compliance Checks: Automatically ensure the system meets security
standards (like ISO, GDPR).

 Role-Based Access Control (RBAC): Only authorized users can access
specific parts of the system.

Benefits:
 Reduces human errors and delays in security enforcement

 Ensures secure code and infrastructure from development to production

 Helps in early detection of threats and faster mitigation

3. Incident & Problem Management

Purpose:
To detect system failures quickly, recover fast, and analyze the cause to prevent future
occurrences.

Key Activities:

 Incident Detection: Using monitoring tools to identify unexpected behavior or
system outages

 Recovery: Restarting services, scaling infrastructure, or rolling back changes to
restore service

 Root Cause Analysis (RCA): Investigating the origin of the problem (e.g., a
faulty deployment or hardware issue)

 Logging & Rollback: Logs help understand system behavior; rollback plans
enable quick reversal of faulty deployments.

Example:
If a new version of an app crashes, engineers use logs to identify the error, roll back to
the previous version, and update the pipeline to prevent the issue from recurring.

4. Feedback Mechanisms

Purpose:
Feedback ensures that DevOps teams can continuously improve the application and
processes based on data.

Types of Feedback:

 System Feedback:

o Metrics like latency, uptime, and error rates

o Logs (e.g., using ELK Stack – Elasticsearch, Logstash, Kibana)

o Error reporting tools like Sentry for real-time alerts on bugs

 User Feedback:
o Feature usage statistics

o User ratings and reviews

o Support tickets or surveys

Benefits:

 Identifies performance issues, usability problems, or bugs early

 Drives product improvements and enhances user satisfaction

 Enables data-driven decision-making

Deployment Pipeline
A Deployment Pipeline is a set of automated stages through which software passes
from development to production. It embodies the CI/CD (Continuous
Integration/Continuous Delivery or Deployment) philosophy in DevOps.

Purpose of a Deployment Pipeline:

 Automate the delivery of software updates.

 Ensure only validated and tested code is deployed.

 Detect bugs and integration issues early.

 Enable faster and more reliable releases.

Overall Architecture – Stages of Deployment Pipeline

Explanation of Each Stage
1. Source Code Repository
 Code is stored and version-controlled using tools like Git, GitHub, Bitbucket,
or GitLab.

 Developers push/merge code into the main branch, triggering the pipeline.

2. CI Stage (Build & Unit Testing)

 CI tools like Jenkins, CircleCI, or GitHub Actions automatically:

o Pull the latest code

o Compile and build the application

o Run unit tests to validate logic and basic functionality

 Feedback is sent immediately to developers on success/failure.

3. Testing Stage
 Involves deeper levels of automated testing:

o Integration Testing

o Functional/UI Testing (e.g., Selenium)

o Regression Testing

 Ensures that new changes don’t break existing features.

4. Staging Environment

 Code is deployed to a staging server, which closely mirrors the production
setup.

 Final manual tests or UAT (User Acceptance Testing) may occur here.

 Helps stakeholders preview changes before going live.

5. Production Deployment

 Code is released to live users.

 Can be done using:

o Blue-Green Deployment

o Canary Deployment

o Rolling Updates

 Tools like AWS CodeDeploy, Kubernetes, or Docker Swarm help automate
this step.

Benefits of Deployment Pipeline Architecture

 Automation reduces errors and manual overhead.

 Ensures high-quality code reaches production.

 Supports rapid, frequent deployments with confidence.

 Builds a feedback loop for continuous improvement.

DevOps Tools Commonly Used in Pipelines

 Version Control: Git, GitHub, GitLab

 CI/CD: Jenkins, GitHub Actions, GitLab CI, CircleCI

 Build & Test: Maven, Gradle, JUnit, Selenium

 Containerization: Docker

 Orchestration: Kubernetes

 Deployment: AWS CodePipeline, Azure DevOps, ArgoCD

Microservices Architecture

Definition:

Microservices Architecture is a software design approach where a large application is
divided into smaller, independent services. Each service performs a specific
function and communicates with others via APIs (usually HTTP or messaging
queues).

Key Characteristics:
 Modularity: Each microservice corresponds to a specific business capability
(e.g., Login, Payment, Orders).

 Independent Development: Teams can develop, test, deploy, and scale
microservices separately.

 Decentralized Data: Each service may have its own database.

 Fault Isolation: Failure in one service does not crash the entire system.

Example: Food Delivery App

 Login Service: Handles user authentication.

 Restaurant Service: Manages restaurant listings and menus.

 Order Service: Handles food orders.

 Payment Service: Processes payments.

 Delivery Tracking Service: Shows delivery status.

Each of these services can be developed and updated independently.
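To make "independent service exposed via an API" concrete, here is a minimal sketch of what one such service could look like, using Flask (an assumption; the notes do not prescribe a framework). The endpoint names and data are illustrative only.

```python
# A minimal "Order Service" microservice exposing an HTTP API (illustrative sketch).
# Uses Flask; in a real system this service would have its own database and deployment.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for the service's own database.
orders = {1: {"item": "Veg Pizza", "status": "PREPARING"}}

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    order = orders.get(order_id)
    if order is None:
        return jsonify({"error": "order not found"}), 404
    return jsonify(order)

@app.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json()
    order_id = max(orders) + 1
    orders[order_id] = {"item": payload["item"], "status": "PLACED"}
    return jsonify({"order_id": order_id}), 201

if __name__ == "__main__":
    app.run(port=5001)   # other services (payment, delivery) would call this API over HTTP
```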

Benefits:

 Easier to scale specific services (e.g., scale only the Order service during lunch
hours).

 Faster time-to-market through parallel development.

 Easier maintenance and debugging.

Example:
DevOps Pipeline Architecture

In a DevOps environment, a deployment pipeline automates all the steps required to deliver a new feature from code development to production. It ensures code quality, testing, continuous delivery, and faster feature rollout. Below are three comprehensive real-time examples demonstrating how deployment pipelines work in different application domains.

Example 1: Food Delivery App – Feature: Live Order Tracking

Scenario:
A feature is introduced to allow customers to view real-time location updates of their
delivery agents.

Pipeline Stages:

1. Code Commit

o Developers implement GPS integration.

o Code is pushed to GitHub for version control.

2. Continuous Integration (CI)

o Jenkins automatically triggers the build process.

o Dependencies such as Google Maps API are configured.

o Linting and static code analysis tools check for issues.

3. Automated Testing

o Unit Tests validate GPS functions.

o Integration Tests ensure that GPS integrates with the order management
system.

o Tests are automatically run using tools like JUnit or Selenium.

4. Staging Environment

o QA team simulates live order scenarios with mock users and locations.

o Load and performance testing is conducted.

5. Deployment to Production

o Canary Deployment is used to release to a small group of users (10%).

o Once verified, the update rolls out to all users.

Results:

 Lower failure risk

 Incremental rollout ensures safe user experience

 Fast rollback possible if bugs are found

Example 2: Banking App – Feature: Daily Spending Summary

Scenario: A feature that gives users a visual summary of their daily expenses.

Pipeline Stages:

1. Code Commit

o Backend handles transaction analysis.

o Frontend UI displays summary.

o Code pushed to GitLab repository.

2. CI/CD Pipeline
o GitLab CI/CD builds the backend and frontend.

o Validates configurations and libraries.

3. Automated Testing

o Unit Tests for filters and calculations.

o Functional Tests ensure UI correctly reflects data.

4. Staging
o Simulated accounts and mock transaction data.

o Tests for security compliance and performance.

5. Production Deployment

o Automated release through AWS CodeDeploy.

o Error monitoring tools like ELK stack observe live performance.

Results:

 Secure and consistent deployment

 Higher feature reliability

 Faster feedback for enhancements

Example 3: E-Learning Platform – Feature: Quiz Countdown Timer

Scenario:

To prevent cheating, a countdown timer is added for time-limited quizzes.

Pipeline Stages:

1. Code Commit

o Timer logic built with React (frontend) and Node.js (backend).

o Code is versioned through GitHub.

2. CI Process
o GitHub Actions build and run unit tests.

o Ensures dependencies like moment.js and timer libraries are installed.

3. Test Automation

o Verifies timer accuracy and edge cases.

o Tests auto-submission after time expires.

4. Staging Review
o QA and instructors simulate quizzes.

o Validate behaviour and cross-device compatibility.

5. Deployment

o Uses Blue-Green Deployment.

o New environment tested before routing user traffic.

Results:

 Zero downtime

 Bug-free timer behaviour in production

 Safe rollback if failure occurs

Containers & Orchestration


A container is:

 A runtime environment that packages an application and its dependencies
together.

 Isolated from the host system and other containers.

 Faster and more efficient than traditional virtual machines because they share
the host OS kernel instead of needing a full OS.

Why Use Containers?


 Portability: Run the same container anywhere — dev, test, or prod.

 Consistency: No "it works on my machine" issue.

 Scalability: Easy to replicate and scale containers.

 Isolation: Each container runs in its own isolated environment.

1. Docker (Containerization Tool):

Definition:
Docker is a tool used to package applications and their dependencies into lightweight,
portable containers.

Why use Docker?

 Solves the “it works on my machine” problem.

 Ensures consistency across development, testing, and production
environments.

 Applications inside containers run the same way on any system that supports
Docker.

Key Benefits:

 Fast deployment and startup

 Efficient use of system resources

 Easy version control and rollback

Example: You can run the "Payment Service" of your app in a container that includes
Node.js, payment API libraries, and config files—all bundled together.

2. Kubernetes (Container Orchestration Tool):

Definition:
Kubernetes (K8s) is an open-source platform that automates deployment, scaling,
and management of containerized applications.

What it does:

 Starts and stops containers as needed

 Restarts crashed services automatically

 Distributes traffic between services (load balancing)

 Scales services based on demand (auto-scaling)

Key Features:

 Self-healing: If a service fails, it gets restarted automatically.

 Horizontal scaling: Add more instances (pods) of a service.

 Service Discovery & Load Balancing: Manages internal communication
between microservices.

Example:
If the "Order Service" becomes busy during peak hours, Kubernetes can automatically
spin up additional containers to handle the load.
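As a hedged illustration (not part of the notes), the snippet below uses the official Kubernetes Python client to scale a hypothetical order-service Deployment to more replicas; in practice a HorizontalPodAutoscaler usually makes this decision automatically.

```python
# Scaling a Deployment with the official Kubernetes Python client (illustrative sketch).
# Assumes a kubeconfig is available and a Deployment named "order-service" exists.
from kubernetes import client, config

config.load_kube_config()                      # load credentials from ~/.kube/config
apps = client.AppsV1Api()

# Raise the replica count of the hypothetical "order-service" Deployment to 5.
apps.patch_namespaced_deployment_scale(
    name="order-service",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("order-service scaled to 5 replicas")
```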

Building and Testing in the Deployment Pipeline


A deployment pipeline automates the journey from code creation to final deployment.
Two crucial stages in this pipeline are Building and Testing, which ensure the
software is robust, error-free, and production-ready.

1. Code Commit & Pre-Checks

Before the build process begins, developers push (commit) their code to a shared
repository (like GitHub, GitLab, Bitbucket).

Pre-check Activities:

 Code Formatting Check: Ensures code follows proper indentation, naming
conventions, etc.

 Static Code Analysis: Scans code for vulnerabilities, potential bugs, or
unreachable code.

 Linting: Identifies syntax and style errors (e.g., ESLint for JavaScript, Pylint
for Python).

 Code Review (optional but recommended): Peer developers review code
changes before merging.

Tools Used:

 Linters: ESLint, Pylint, Flake8

 Static Analysis: SonarQube, Checkstyle, PMD

2. Continuous Integration (CI) Build

Once code is committed, CI tools are triggered to build the application automatically.

Key Activities:

 Compilation: Source code is converted to executable code (e.g., Java →
bytecode).

 Dependency Management: Tools download and manage required
libraries/frameworks.

 Packaging: The build generates deployable artifacts (like .jar, .war, .zip, or
Docker images).

CI Tools:
 Jenkins

 GitHub Actions

 GitLab CI/CD

 CircleCI

 Travis CI

Benefits:

 Detect build issues early

 Ensures team members don't break shared codebase

 Faster feedback loop for developers

3. Automated Testing

Automated tests are critical for validating code behavior and preventing regressions.

Test Categories:

a. Unit Testing

 Tests individual units/modules (functions or methods)

 Isolated: No external dependencies (like databases or APIs)

 Tools: JUnit (Java), pytest (Python), NUnit (.NET)

b. Integration Testing

 Tests how different modules interact with each other

 Example: Login module interacting with the user database

 Tools: TestNG, Postman (for APIs), Spring Test

c. Regression Testing

 Ensures that new changes haven’t broken existing features

 Often automated with previous test cases

d. UI/Functional Testing (optional in this stage)

 Simulates user interactions to test front-end functionality

 Tools: Selenium, Cypress, Playwright
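As a small illustration of the unit-testing category above, the sketch below (not from the notes) shows a pytest test for a hypothetical discount-calculation function; CI tools such as Jenkins or GitHub Actions would run it on every commit.

```python
# A unit test in pytest for a hypothetical pricing function (illustrative sketch).
# Save as test_pricing.py and run with `pytest`.
import pytest

def apply_discount(amount: float, percent: float) -> float:
    """Return the amount after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(amount * (1 - percent / 100), 2)

def test_apply_discount_normal_case():
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_zero_percent():
    assert apply_discount(99.99, 0) == 99.99

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```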

4. QA / Staging Environment Testing


This phase mimics the production environment and is used for final validation before
deployment.

Activities in QA/Staging:

 Smoke Testing: Basic checks to ensure the build is stable

 Load Testing: Assess application performance under expected user load

 Stress Testing: Determine how system behaves under extreme conditions

 Security Testing: Identify vulnerabilities like SQL injection or XSS

 UAT (User Acceptance Testing): Stakeholders validate if requirements are
met

Tools:
 JMeter (Load Testing)

 OWASP ZAP (Security)

 Locust, Gatling (Performance Testing)

Purpose:

 Catch issues that weren't evident in development

 Simulate real-world usage scenarios

 Ensure stability before moving to production

Why It Matters in DevOps:


 Ensures automation, speed, and quality

 Reduces risk of failed deployments

 Supports Continuous Integration and Delivery (CI/CD) pipelines

DevOps Tools and Their Roles


DevOps is highly tool-driven. Various tools are used for automation, integration,
testing, deployment, and infrastructure management. Below is a detailed
description of commonly used tools and how they contribute to the DevOps lifecycle.

1. Jenkins
 Category: Continuous Integration/Continuous Deployment (CI/CD)

 What It Is:
Jenkins is a widely used open-source automation server that enables
developers to build, test, and deploy their code in a controlled and repeatable
way.

 Features:

o Supports integration with hundreds of plugins (e.g., Git, Docker, Maven).

o Automates workflows like code compilation, unit testing, packaging, and
deployment.

o Easy to configure pipelines using GUI or code (Jenkinsfile).

 Use Case:
Automating the entire CI/CD pipeline. For example, whenever a developer
pushes code to GitHub, Jenkins pulls it, builds it, runs tests, and deploys the
app.

2. GitLab CI
 Category: Integrated CI/CD Platform

 What It Is:
GitLab CI/CD is a built-in DevOps tool within the GitLab platform. It
automates code build, test, and deployment stages based on events in your Git
repository.

 Features:

o Full CI/CD integration with GitLab version control.

o Uses .gitlab-ci.yml for defining pipeline stages.

o Supports parallel jobs, caching, and conditional workflows.

 Use Case:
Ideal for teams using GitLab as their version control system. Streamlines the
development-to-deployment workflow within a single platform.

3. Spinnaker

 Category: Continuous Delivery (CD)

 What It Is:
Spinnaker is an open-source, multi-cloud continuous delivery platform
designed for safe, fast, and repeatable deployments.

 Features:

o Supports multi-cloud deployments (AWS, GCP, Kubernetes, Azure).

o Offers automated rollbacks, blue-green and canary deployments.

o Provides rich deployment dashboards and approval gates.

 Use Case:
Best for enterprises deploying apps across multiple cloud environments with
strict compliance and rollback strategies.

4. Ansible

 Category: Infrastructure Automation & Configuration Management

 What It Is:
Ansible is a lightweight automation engine used for automating provisioning,
configuration, and orchestration of infrastructure.

 Features:

o Uses simple YAML files (playbooks) to define infrastructure as code.

o Agentless — runs over SSH without needing client agents.

o Can be used for automating patching, firewall setup, database installs,
etc.

 Use Case:
Provision a fleet of servers, configure environments, and deploy applications —
all through scripts.

5. AWS CodeDeploy

 Category: Deployment Automation

 What It Is:
AWS CodeDeploy is a fully managed deployment service from Amazon that
automates software deployments to:

o Amazon EC2 instances

o AWS Lambda

o On-premises servers

 Features:

o Supports rolling updates, blue/green deployments.

o Integrates with AWS CodePipeline for end-to-end DevOps.

o Tracks deployment status and failure rollback automatically.

 Use Case:
Automate deployment of web apps, backend services, or microservices in a
controlled, monitored environment using AWS infrastructure.

6. Helm

 Category: Kubernetes Package Management

 What It Is:
Helm is a package manager for Kubernetes that helps you define, install, and
upgrade Kubernetes applications using reusable templates called charts.

 Features:

o Simplifies complex Kubernetes deployments into one-click installs.

o Manages application versions, dependencies, and rollback easily.

o Supports values.yaml files for dynamic configurations.

 Use Case:
Use Helm to deploy and manage cloud-native apps (like databases, monitoring
tools, microservices) in Kubernetes clusters efficiently.

CASE STUDY
Case Study: Atlassian DevOps Transformation

Problem Statement
Atlassian, a software company that develops tools like Jira and Confluence, was
facing several major issues:

 Deployment cycles were slow because of a monolithic architecture.

 It was difficult to scale individual parts of the application independently.

 Teams were dependent on each other, which delayed releases.

 Manual deployment processes increased the risk of errors and failures.

Solution: DevOps and Microservices Approach

To solve these problems, Atlassian adopted DevOps practices and restructured their
application into a microservices architecture.

Key strategies included:


 Breaking down the large monolithic application into smaller, independent
services such as login, search, and notifications.

 Automating the entire software delivery process using CI/CD tools.

 Migrating to cloud infrastructure to gain flexibility and better scalability.

Tools Used in DevOps Pipeline

Tools and their purpose:

 Docker: To create containerized environments for microservices

 Bitbucket: For source code management and team collaboration

 Bamboo: To automate building, testing, and deploying applications

 AWS: For hosting services in the cloud with scalability and reliability

Benefits of DevOps Adoption

Faster Releases
Each microservice could be developed and deployed separately. This allowed the
team to release updates more frequently and quickly.

Improved Scalability
Individual services could be scaled based on their demand without affecting other
parts of the system.

Higher Reliability
Failures in one service did not affect the functioning of others. This improved the
overall stability of the application.

Key DevOps Metrics Tracked

 Mean Time to Recovery (MTTR):


The time taken to fix and recover from failures decreased significantly because
of automation and better monitoring.

 Deployment Frequency:
Atlassian was able to shift from monthly releases to daily deployments.

 Error Rate:
The number of bugs and failures was reduced due to better testing practices and
gradual rollouts.
