
Module 5

Continuous Delivery vs. Continuous Deployment

Continuous Delivery (CD) and Continuous Deployment (CD) are software development practices that
automate the release process, making deployments faster, more reliable, and more efficient.

1. Continuous Delivery:

Continuous Delivery ensures that software is always in a deployable state. Every code change is
automatically tested and prepared for deployment, but the actual release to production requires
manual approval.

Key Features:

 Automated build, test, and integration processes.


 Deployments to staging environments for validation.
 Manual approval required for production release.
 Reduces deployment risks by making incremental updates.

Example Workflow:

 Developer pushes code to a repository (e.g., GitHub).


 CI/CD pipeline runs automated tests.
 If tests pass, code is deployed to a staging environment.
 Manual approval triggers deployment to production.
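
As a concrete sketch of this workflow, the illustrative GitLab CI configuration below (the job names, script commands, and environment names are assumptions, not part of the module) automates everything up to production and then waits for a person to approve the final job:

```yaml
# .gitlab-ci.yml (illustrative) - Continuous Delivery with a manual production gate
stages:
  - test
  - deploy-staging
  - deploy-production

run-tests:
  stage: test
  script:
    - ./run_tests.sh              # placeholder for the automated test suite

deploy-staging:
  stage: deploy-staging
  script:
    - ./deploy.sh staging         # placeholder deployment script
  environment:
    name: staging

deploy-production:
  stage: deploy-production
  script:
    - ./deploy.sh production
  environment:
    name: production
  when: manual                    # a person must trigger this job; this gate is
                                  # what makes the pipeline "delivery", not "deployment"
```

The `when: manual` keyword is the only piece that enforces the approval step; everything before it runs automatically.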

Pros:

 Reduces deployment risk by testing in staging environments.


 Allows better control over releases.
 Enables faster, more frequent releases.

Cons:

 Requires manual intervention, which may slow down deployment.


 Human errors in approvals could delay releases.

2. Continuous Deployment:

Continuous Deployment goes one step beyond Continuous Delivery by automatically deploying every
change that passes the automated testing phase directly to production, without human intervention.

Key Features:

 Every code change is deployed to production automatically.


 Requires a strong automated testing suite.
 No manual approval needed.

Example Workflow:

 Developer pushes code to a repository.


 CI/CD pipeline runs automated tests.
 If tests pass, code is automatically deployed to production.
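
Reusing the illustrative GitLab CI sketch from the Continuous Delivery section, the only change needed for Continuous Deployment is removing the manual gate from the production job:

```yaml
deploy-production:
  stage: deploy-production
  script:
    - ./deploy.sh production      # placeholder deployment script
  environment:
    name: production
  # no "when: manual" here - the job runs automatically once earlier stages pass
```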

Pros:

 Faster time to market.


 No delays due to manual approvals.
 Encourages small, frequent releases, reducing the risks of large failures.

Cons:

 A bug in the code can directly impact users.


 Requires a mature testing and rollback strategy.

Which One to Choose?

 Choose Continuous Delivery if you need human approval before releases (e.g., banking,
healthcare).
 Choose Continuous Deployment if your business requires rapid iterations and frequent
updates (e.g., SaaS, e-commerce).

Benefits of Continuous Deployment

Continuous deployment provides numerous benefits to organizations, including the following:

a. Improved quality
b. Faster time to market
c. Enhanced customer experience
d. Reduced costs
e. Better team collaboration
f. Accelerated feedback loop
a. Improved quality

Automated testing, the most critical dependency for continuous deployment, occurs at each stage of the deployment pipeline lifecycle. This improves the overall quality of each release; for instance, automated tests catch defects before they reach production.
b. Faster time to market

Continuous deployment helps deliver updates and software releases quickly. Once new updates pass
predefined tests, the system automatically pushes them to the software's end users.

c. Enhanced customer experience

Automated testing allows development teams to quickly and consistently deploy new features and
improvements for enhanced customer experience.

d. Reduced costs

Automating the deployment eliminates bottlenecks and reduces manual tasks. This process helps
businesses save costs by reducing downtime.

e. Better team collaboration

Continuous deployment frees up developers to focus more on writing code and performing tests
rather than on manual deployment procedures. It also supports team collaboration and
communication by providing a single view across all applications and environments.

f. Accelerated feedback loop

Continuous deployment accelerates the feedback loop by allowing developers to release code
changes frequently. This capability reduces the time that it takes to receive feedback from users and
stakeholders.

What is a Deployment Pipeline and How to Build it?

A deployment pipeline is an essential DevOps testing strategy that automates the software delivery
process, ensuring rapid and reliable application deployments. It provides a structured approach for
integrating, testing, and releasing code changes, allowing teams to detect and resolve issues early.

What is a Deployment Pipeline?

A Deployment Pipeline is an automated workflow that takes code changes from development
through to production. It consists of stages, such as building, testing, and deploying, to ensure the
software is correctly integrated, verified, and ready for release.

Each step in the pipeline checks code quality and functionality to catch issues early. It improves the
speed, reliability, and safety of deployments.

Benefits of a Deployment Pipeline

The key benefits of a well-implemented deployment pipeline are listed below:

 Faster time-to-market: automating the software delivery process expedites the release of new features and bug fixes and enables faster, more frequent releases.
 Early defect detection: automated testing at each stage identifies bugs early in the development cycle, minimizing the cost and effort of fixing them and reducing the risk of deployment failures and costly rollbacks.
 Consistency across environments: the pipeline reduces the chances of configuration errors that cause discrepancies between development, staging, and production environments.
 Better collaboration: it fosters collaboration between development, testing, and operations teams, promoting a culture of shared responsibility, continuous improvement, and faster feedback loops.
 Fewer failed deployments: automated testing and validation catch issues before they reach production, leading to fewer deployment failures and rollbacks.
 Security and compliance: continuous integration and automated security checks ensure that code meets security and compliance standards before deployment.

Stages of a Deployment Pipeline

A deployment pipeline consists of a series of automated stages that code changes must pass through
before being deployed to production. Each stage is designed to verify the code's quality,
functionality, and compatibility, ensuring that the software release is efficient, reliable, and maintains
consistent quality across different environments.

We will explore the main stages of a deployment pipeline and their significance in the software
development process.

Stage 1: Commit Stage:

The deployment pipeline starts with the Commit stage, triggered by code commits to the version
control system (VCS). In this stage, the code changes are fetched from the VCS, and the build server
automatically compiles the code, running any pre-build tasks required. The code is then subjected to
static code analysis to identify potential issues, such as coding standards violations or security
vulnerabilities. If the code passes these initial checks, the build artifacts are generated, which serve
as the foundation for subsequent stages.

Stage 2: Automated Testing Stage:

After the successful compilation and artifact generation in the Commit stage, the next stage involves
automated testing. Various tests are executed in this stage to ensure the code’s functionality,
reliability, and performance.

 Unit tests, which validate individual code components, are run first, followed by integration
tests that verify interactions between different components or modules.

 Functional tests check whether the application behaves as expected from an end-user
perspective.

Stage 3: Staging Deployment:

The application is deployed to a staging environment once the code changes have successfully
passed the automated testing stage. The staging environment resembles the production
environment, allowing for thorough testing under conditions that simulate real-world usage. This
stage provides a final check before the application is promoted to production.

Stage 4: Production Deployment:

The final stage of the deployment pipeline is the Production Deployment stage. Once the application
has passed all the previous stages and received approval from stakeholders, it is deployed to the
production environment.

 This stage requires extra caution as any issues or bugs introduced into the production
environment can have significant consequences.
 To minimize risk, organizations often use deployment strategies such as canary releases,
blue-green deployments, or feature toggles to control the release process and enable easy
rollback in case of any problems.

 Continuous production environment monitoring is also essential to ensure the application’s stability and performance.
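
The four stages map naturally onto a single pipeline definition. The sketch below uses GitLab CI syntax; the job names, commands, and artifact paths are illustrative assumptions, and any CI/CD tool with equivalent stages could be used instead:

```yaml
stages:
  - commit        # Stage 1: compile, static analysis, build artifacts
  - test          # Stage 2: unit, integration, and functional tests
  - staging       # Stage 3: deploy to a production-like environment
  - production    # Stage 4: controlled release to end users

build:
  stage: commit
  script:
    - ./build.sh                    # placeholder: compile and run pre-build tasks
    - ./static_analysis.sh          # placeholder: linting and security checks
  artifacts:
    paths:
      - dist/                       # build artifacts handed to later stages

automated-tests:
  stage: test
  script:
    - ./run_tests.sh unit integration functional   # placeholder test runner

deploy-staging:
  stage: staging
  script:
    - ./deploy.sh staging
  environment:
    name: staging

deploy-production:
  stage: production
  script:
    - ./deploy.sh production
  environment:
    name: production
  when: manual                      # stakeholder approval before production
```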

Build Pipeline vs Deployment Pipeline

A Build Pipeline focuses on compiling code and preparing it for testing, while a Deployment Pipeline
handles the stages required to release tested code into production.

 Primary Focus: the build pipeline compiles code and creates an executable build; the deployment pipeline releases code to production environments.
 Key Stages: the build pipeline covers code compilation, dependency management, and unit testing; the deployment pipeline covers acceptance testing, staging deployment, and production deployment.
 End Goal: the build pipeline produces an artifact ready for further testing; the deployment pipeline delivers a stable, tested build to users.
 Automation Level: the build pipeline is often fully automated for frequent integration; the deployment pipeline may include approvals or manual steps, especially for production.
 Trigger: the build pipeline is typically triggered on a code commit or merge; the deployment pipeline is triggered after build pipeline success and further validations.
 Tools Used: the build pipeline uses build tools like Maven, Gradle, or npm; the deployment pipeline uses deployment tools like Jenkins, AWS CodePipeline, or GitLab CI.
 Involves Testing? The build pipeline runs mostly unit and integration tests; the deployment pipeline focuses on staging, acceptance, and post-deployment monitoring.

Deployment Strategies to Consider

Deployment strategies are practices used to change or upgrade a running instance of an application. The following sections explain several common deployment strategies, starting with the basic deployment.

The Basic Deployment

In a basic deployment, all nodes within a target environment are updated at the same time with a new service or artifact version.

Pros:

The benefits of this strategy are that it is simple, fast, and cheap. Use this strategy if 1) your application service is not business-, mission-, or revenue-critical, or 2) your deployment is to a lower environment, during off-hours, or with a service that is not in use.

Cons:

Of all the deployment strategies described here, it is the riskiest and does not follow best practices. Basic deployments are not outage-proof, they slow down rollback, and when combined with manual processes they carry the greatest risk of a deployment disaster.
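
For teams running on Kubernetes, a basic deployment roughly corresponds to the `Recreate` strategy, where every old pod is stopped before the new version starts. The manifest below is an illustrative sketch; the service and image names are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                  # assumed service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: Recreate                  # all instances replaced at once; expect a brief outage
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:2.0.0   # assumed image tag
```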

The Multi-Service Deployment

In a multi-service deployment, all nodes within a target environment are updated with multiple new
services simultaneously. This strategy is used for application services that have service or version
dependencies, or if you’re deploying off-hours to resources that are not in use.
Pros:

Multi-service deployments are simple, fast, cheap, and not as risk-prone as a basic deployment.

Cons:

Multi-service deployments are slow to roll back and not outage-proof. Using this deployment
strategy also leads to difficulty in managing, testing, and verifying all the service dependencies.

Rolling Deployment

A rolling deployment is a deployment strategy that incrementally updates running instances of an application with the new release. All nodes in a target environment are updated with the new service or artifact version in batches of N nodes at a time.

Pros:

The benefits of a rolling deployment are that it is relatively simple to roll back, less risky than a basic
deployment, and the implementation is simple.

Cons:

Since nodes are updated in batches, rolling deployments require services to support both new and
old versions of an artifact. Verification of an application deployment at every incremental change
also makes this deployment slow.
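
In the same illustrative Kubernetes terms used for the basic deployment, a rolling deployment only changes the `strategy` block of the Deployment spec; `maxSurge` and `maxUnavailable` control the batch size (the values here are assumptions):

```yaml
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above the desired count during the update
      maxUnavailable: 1      # at most one pod taken out of service per batch
```

Because old and new pods serve traffic side by side while the batches roll out, both versions must remain compatible, which is exactly the constraint noted above.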

Blue-Green Deployment

Blue-green deployment is a deployment strategy that uses two identical environments, a “blue” (staging) environment and a “green” (production) environment, to reduce downtime and risk. One environment runs the current application version while the other runs the new version.

Quality assurance and user acceptance testing are typically done in the blue environment, which hosts the new version or changes. Once those changes have been tested and accepted, user traffic is shifted from the green environment to the blue environment, which then serves as the live environment.

Pros:

One of the benefits of the blue-green deployment is that it is simple, fast, well-understood, and easy
to implement. Rollback is also straightforward, because you can simply flip traffic back to the old
environment in case of any issues. Blue-green deployments are therefore not as risky compared to
other deployment strategies.

Cons:

Cost is a drawback to blue-green deployments. Replicating a production environment can be complex and expensive, especially when working with microservices. Quality assurance and user acceptance testing may not identify all of the anomalies or regressions either, and so shifting all user traffic at once can present risks. An outage or issue could also have a wide-scale business impact before a rollback is triggered, and depending on the implementation, in-flight user transactions may be lost when the shift in traffic is made.
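
One common way to implement the traffic switch (an implementation assumption, not something prescribed by the module) is to run the two environments as separate Deployments behind a single Kubernetes Service and flip the Service selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
    track: green        # "green" is the live version in this module's convention;
                        # change this to "blue" to shift all traffic to the environment
                        # hosting the new release, or back to "green" to roll back
  ports:
    - port: 80
      targetPort: 8080
```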

Canary Deployment

A canary deployment is a deployment strategy that releases an application or service incrementally to a subset of users. All infrastructure in a target environment is updated in small phases (e.g., 2%, 25%, 75%, 100%). Because of this control, a canary release is the least risky of all the deployment strategies described here.

Pros:

Canary deployments allow organizations to test in production with real users and use cases and
compare different service versions side by side. It’s cheaper than a blue-green deployment because it
does not require two production environments. And finally, it is fast and safe to trigger a rollback to a
previous version of an application.

Cons:

Drawbacks to canary deployments involve testing in production and the implementations needed.
Scripting a canary release can be complex: manual verification or testing can take time, and the
required monitoring and instrumentation for testing in production may involve additional research.
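
Phased traffic shifting is usually scripted through a progressive-delivery tool rather than done by hand. As one illustrative option (an assumption; the module does not name a tool), Argo Rollouts lets the phases be declared directly:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 2            # send roughly 2% of traffic to the new version
        - pause: {}               # wait for manual promotion or metric checks
        - setWeight: 25
        - pause: {duration: 10m}
        - setWeight: 75
        - pause: {duration: 10m}  # after the final step, 100% of traffic is shifted
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:2.1.0   # assumed image tag
```

Hitting small percentages such as 2% precisely usually requires pairing this with a service mesh or ingress-based traffic routing; without one, the weights are approximated by the ratio of old to new replicas.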

What are Artifacts?

 Artifacts are the deliverables of the software development process, including compiled code,
libraries, dependencies, configuration files, documentation, and build outputs.

 They are the building blocks of software applications and are used in various stages of the
software development lifecycle (SDLC).

What is Artifact Management?

Artifact management is the process of storing, organizing, and distributing software artifacts (such as
binaries, libraries, Docker images, and packages) in a structured manner. It ensures reliable and
repeatable builds in CI/CD pipelines.

In practice, artifact management means using repositories for builds and packages to store and manage compiled code, libraries, dependencies, and other build outputs, ensuring consistency and seamless integration across the software development lifecycle.

Why Use Artifact Repositories?

 Centralized Storage:

Artifact repositories provide a central location to store and manage artifacts, ensuring that all team
members have access to the same version of the artifacts.

 Consistency:
They ensure consistency across different stages of the software development lifecycle, from coding
to testing, build, release, and deployment.

 Version Control:

Artifact repositories allow for version control, enabling teams to track changes over time and revert
to previous versions if necessary.

 Improved Collaboration:

A centralized repository facilitates collaboration among developers, testers, and other stakeholders,
as they can easily access and share artifacts.

 Faster Builds:

By storing and retrieving artifacts from a central location, artifact repositories can significantly
improve build times.

 Dependency Management:

Artifact repositories can help manage dependencies between different components of a software
application, ensuring that all components are using the correct versions of the required libraries.

 Security:

Artifact repositories can help to secure the software development process by providing a controlled
environment for storing and distributing artifacts.

Examples of Artifacts:

 Compiled code (e.g., JAR files, executables)


 Libraries and dependencies (e.g., Maven artifacts, NuGet packages)
 Container images (e.g., Docker images)
 Configuration files
 Documentation
 Build outputs

Popular Artifact Repositories:

 JFrog Artifactory – Supports multiple package formats (Maven, npm, Docker, etc.)
 Nexus Repository Manager – Repository for Java, Docker, and other package types
 AWS CodeArtifact – Fully managed artifact store for AWS users
 GitHub Packages – Integrated with GitHub for dependency management

Types of Artifacts Stored

 Build Artifacts – .jar, .war, .zip, .tar.gz (Java, Node.js, Python, etc.)
 Container Images – Docker images stored in Docker Hub, AWS ECR, GCR
 Library Packages – npm, PyPI, Maven, Go modules
 Infrastructure as Code (IaC) – Terraform modules, Helm charts

Best Practices for Artifact Management

 Use Immutable Artifacts – Ensure artifacts are versioned and not modified.
 Tag & Version Properly – Use semantic versioning (v1.0.0, v1.0.1).
 Automate Cleanup Policies – Remove old artifacts to save storage space.
 Enable Security Scanning – Scan for vulnerabilities using tools like Trivy or Snyk.
 Use Access Controls – Restrict unauthorized access to artifact repositories.
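
As a small illustration of the first two practices, the CI job below (GitLab CI syntax; the `publish` stage, registry URL, and variables are assumptions and belong to a larger pipeline) publishes an immutable, semantically versioned container image:

```yaml
publish-image:
  stage: publish
  script:
    # Tag the image with the git release tag (e.g. v1.0.1) so the artifact is
    # immutable and traceable to a specific commit; never overwrite an existing tag.
    - docker build -t registry.example.com/my-app:${CI_COMMIT_TAG} .
    - docker push registry.example.com/my-app:${CI_COMMIT_TAG}
  rules:
    - if: $CI_COMMIT_TAG            # run only for tagged releases
```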

Environment Management

Environment management, encompassing development (dev), staging, and production (prod) environments, is a crucial aspect of software development, ensuring a smooth and reliable deployment process. Each environment serves a distinct purpose, from initial coding and testing to final user access.

Here's a breakdown of each environment:

1. Development (Dev) Environment:

This is where developers write, test, and debug code, typically using a local or dummy database. It's a
sandbox for experimentation and innovation, allowing for rapid iteration without impacting live
systems.

Purpose:

 Used by developers to write, test, and debug code.

 Frequent code changes and commits.

 Typically runs locally or in a shared dev environment.

Characteristics:

 Fast feedback loop


 Minimal resource allocation
 Uses mock or local databases
 Often runs on Docker, Kubernetes (local), Minikube

2. Staging (Pre-Production) Environment:

A pre-production environment that mirrors the production setup as closely as possible, including
server configurations, databases, and network settings. It's used for final testing and validation
before deploying to production, allowing teams to simulate real-world usage scenarios.

Purpose:

 A production-like environment for integration testing and User Acceptance Testing (UAT).
 Validates the software with real-world configurations.

Characteristics:

 Mirrors production as closely as possible


 Uses real (or masked) data for testing
 Deployments should match production strategies (e.g., rolling updates, blue-green
deployment)
 Runs on cloud environments (AWS, Azure, GCP, Kubernetes clusters)

3. Production (Prod) Environment:

The live environment where the application is accessible to end-users. It's optimized for
performance, reliability, and security, with continuous monitoring and quick response to any issues
that arise.

Purpose:

 The final live environment where end users interact with the application.
 Must be highly stable, secure, and scalable.
Characteristics:

 Strict access control and monitoring


 Auto-scaling for high availability
 Continuous monitoring with Prometheus, Grafana, ELK Stack
 Uses Infrastructure as Code (IaC) for consistent provisioning
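
A common way to keep the three environments consistent is to share one deployment definition and vary only a small per-environment configuration. The sketch below assumes Helm-style values files; the file names and keys are illustrative:

```yaml
# values-dev.yaml - minimal resources, mock data
replicaCount: 1
database:
  host: localhost
  useMockData: true
---
# values-staging.yaml - production-like, masked data
replicaCount: 3
database:
  host: staging-db.internal
  useMockData: false
---
# values-prod.yaml - scaled, monitored, access-controlled
replicaCount: 6
autoscaling:
  enabled: true
monitoring:
  enabled: true
```

The same chart is then released to each environment with a command such as `helm upgrade --install my-app ./chart -f values-staging.yaml`, so the only differences between dev, staging, and prod are the values shown above.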

Introduction to Ansible for Continuous Delivery (CD)

What is Ansible?

Ansible is an open-source automation tool used for configuration management, application deployment, and orchestration. It simplifies Continuous Delivery (CD) by automating software deployment, environment setup, and system configuration across multiple servers.

 Agentless: No need to install anything on target machines.


 Idempotent: Ensures the same result even if run multiple times.
 YAML-Based: Uses easy-to-read playbooks for automation.
 Secure & Scalable: Uses SSH for communication without exposing credentials.

How Ansible Fits into CD Pipelines

In a CD pipeline, Ansible is commonly used for:

 Provisioning Infrastructure (e.g., setting up servers, databases, networks)


 Configuration Management (ensuring consistent environments)
 Application Deployment (automating deployments across environments)
 Rolling Updates & Rollbacks (zero-downtime releases)
 Security & Compliance Enforcement

Key Components of Ansible

 Inventory: Defines target servers (e.g., staging, production).


 Playbooks: YAML-based scripts that describe the automation tasks.
 Modules: Built-in functions for managing configurations (e.g., yum, apt, docker).
 Roles: Reusable units of configuration for modular playbooks.
 Ad-Hoc Commands: Quick execution of tasks without playbooks.
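
A short illustrative playbook ties these components together (the host group, package, service name, and file paths below are assumptions):

```yaml
# deploy.yml - a minimal Ansible playbook
- name: Deploy the web application
  hosts: staging                        # an inventory group; switch to "production" for prod
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Copy the application artifact to the server
      ansible.builtin.copy:
        src: build/my-app.tar.gz        # assumed artifact path
        dest: /opt/my-app/my-app.tar.gz

    - name: Restart the application service
      ansible.builtin.service:
        name: my-app                    # assumed service name
        state: restarted
```

An equivalent ad-hoc command for a quick one-off check would be `ansible staging -i inventory.ini -m ansible.builtin.ping`.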

Integrating Ansible with CI/CD Tools

Ansible can be integrated with tools like:

 Jenkins – Automate deployments via Ansible playbooks.


 GitLab CI/CD – Use Ansible in GitLab pipelines.
 Terraform – Manage infrastructure before deploying with Ansible.
 Kubernetes (K8s) – Automate container deployments.
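
For example, a CI job (GitLab CI syntax; the file names are assumptions) can hand the deployment step over to the playbook sketched above:

```yaml
deploy-with-ansible:
  stage: deploy
  script:
    - ansible-playbook -i inventory.ini deploy.yml --limit staging
  environment:
    name: staging
```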

Why Use Ansible for CD?

 Simplifies deployment automation


 Ensures consistency across environments
 Reduces manual errors & downtime
 Works with cloud platforms (AWS, Azure, GCP)
 Supports declarative & imperative configurations
