
Jenkins

Nitin Shukla
Asst Professor
Department of CSE & IT
JIIT, Noida
Introduction
• Continuous integration systems are a vital part of any
Agile team because they help enforce the ideals of
Agile development
• Jenkins, a continuous build tool, enables teams to
focus on their work by automating the build, artifact
management, and deployment processes
• Jenkins’ core functionality and flexibility allow it to fit
in a variety of environments and can help streamline
the development process for all stakeholders
involved
CI - Defined
• “Continuous Integration is a software development
practice where members of a team integrate their
work frequently, usually each person integrates at
least daily - leading to multiple integrations per day.
Each integration is verified by an automated build
(including test) to detect integration errors as quickly
as possible” – Martin Fowler
CI – What does it really mean?
• At a regular frequency (ideally at every commit),
the system is:
– Integrated
• All changes up until that point are combined into the project
– Built
• The code is compiled into an executable or package
– Tested
• Automated test suites are run
– Archived
• Versioned and stored so it can be distributed as is, if desired
– Deployed
• Loaded onto a system where the developers can interact
with it
CI – The tools
• Code Repositories
– SVN, Mercurial, Git
• Continuous Build Systems
– Jenkins, Bamboo, Cruise Control
• Test Frameworks
– JUnit, Cucumber, CppUnit
• Artifact Repositories
– Nexus, Artifactory, Archiva
Jenkins
• Branched from Hudson
• Java based
• Continuous Build System
• Runs in servlet container
– Glassfish, Tomcat
• Supported by over 400 plugins
– SCM, Testing, Notifications, Reporting, Artifact Saving,
Triggers, External Integration
• Under development since 2005
• http://jenkins-ci.org/
Jenkins - History
• 2005 – Hudson was first released by Kohsuke Kawaguchi of Sun Microsystems
• 2010 – Oracle bought Sun Microsystems
– Due to a naming dispute, Hudson was renamed to
Jenkins
– Oracle continued development of Hudson (as a
branch of the original)
CI-Workflow
CI-Workflow with Jenkins
Different stages of adopting Continuous Integration
CI/CD Environment
The Pipeline
• Continuous Integration
– The practice of merging development work with the
main branch constantly.
• Continuous Delivery
– Continual delivery of code to an environment once the code is ready to ship. This could be staging or production. The idea is that the product is delivered to a user base, which can be QA teams or customers, for review and inspection.
• Continuous Deployment
– The deployment or release of code to production as
soon as it is ready
Jenkins
• Jenkins is a continuous integration and build server.
• It is used to manually, periodically, or automatically build software development projects.
• It is an open source Continuous Integration tool written in Java.
• Jenkins is used by teams of all different sizes, for projects with various languages.
Jenkins’ Master and Slave Architecture
Jenkins Management
• Adding a slave node to Jenkins
• Building Delivery Pipeline
• Pipeline as a Code
• Implementation of Jenkins in the Projects
Adding a slave node to Jenkins
Jenkins Architecture
• Single Server
Single Server
• This single Jenkins server was not enough to meet certain requirements like:
– Sometimes you might need several different environments to test your builds. This cannot be done by a single Jenkins server.
– If larger and heavier projects get built on a regular basis, then a single Jenkins server cannot simply handle the entire load.
Jenkins Distributed Architecture
• Jenkins uses a Master-Slave architecture to manage distributed builds.
• In this architecture, Master and Slave communicate through the TCP/IP protocol.
Jenkins Distributed Architecture
How Does the Jenkins Master and Slave Architecture Work?
How It Works?
• Jenkins checks the Git repository at periodic intervals for any changes made in the source code.
• Each build requires a different testing environment, which is not possible on a single Jenkins server. In order to perform testing in different environments, Jenkins uses various Slaves.
• Jenkins Master requests these Slaves to perform testing and to generate test reports.
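The periodic repository check described above is usually set up as an SCM polling trigger. A minimal sketch in declarative Pipeline syntax, assuming a Maven project (the schedule and build command are placeholders):

    pipeline {
        agent any
        // Poll the source repository for changes roughly every five minutes;
        // 'H' spreads polling across the hour (standard Jenkins cron syntax).
        triggers {
            pollSCM('H/5 * * * *')
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean package'   // placeholder build command
                }
            }
        }
    }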
Jenkins Master Node
• Scheduling build jobs.
• Dispatching builds to the slaves for the actual execution.
• Monitoring the slaves (possibly taking them online and offline as required).
• Recording and presenting the build results.
• A Master instance of Jenkins can also execute build jobs directly.
Jenkins Slave Node
• It listens for requests from the Jenkins Master instance.
• Slaves can run on a variety of operating systems.
• The job of a Slave is to do as it is told, which involves executing build jobs dispatched by the Master.
• You can configure a project to always run on a particular Slave machine or a particular type of Slave machine, or simply let Jenkins pick the next available Slave.
How to setup Jenkins Master and
Slaves?
• Go to the Manage Jenkins section and scroll down to Manage Nodes.
How to setup Jenkins Master and
Slaves?
• Click on New Node
How to setup Jenkins Master and
Slaves?
• Give a name for the node, choose the Permanent Agent
option and click on Ok.
How to setup Jenkins Master and
Slaves?
• Enter the details of the node slave machine.
– Name: Name of the Slave. e.g.: Test
– Description: Description for this slave (optional). e.g.: testing slave
– No. of Executors: Maximum number of parallel builds the Jenkins master can perform on this slave. e.g.: 2
– Remote root directory: A slave needs to have a directory dedicated to Jenkins. Specify the path to this directory on the agent. e.g.: /home/
– Usage: Controls how Jenkins schedules builds on this node. e.g.: Only build jobs with label expressions matching this node.
– Launch method: Controls how Jenkins starts this agent. e.g.: Launch agents via SSH
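Assuming the new node is given the label Test (the example name used above), a Pipeline job can be pinned to it with an agent label expression. A minimal sketch; the label and the shell command are illustrative only:

    pipeline {
        // Run this Pipeline only on agents whose label matches 'Test'.
        agent { label 'Test' }
        stages {
            stage('Build') {
                steps {
                    sh 'echo "Building on the Test slave"'
                }
            }
        }
    }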
Create a Job in Jenkins
Normal Job
• Go to the “New Item” option at the top left-hand side of your main
dashboard.
Normal Job
• Here, enter the name of the item you want to create. Let us use "Hello world."
• Select 'Freestyle project' as the option for this new item.
• Click OK.
GIT Job
• Under the Source Code Management (SCM) tab, select Git as the repository source and enter your Git repository URL.
• In case you have your repository created locally, it is permissible to use a local repository.
• Suppose the GitHub repository you are using is private. In that case, Jenkins will validate the login credentials with GitHub and, upon successful validation, will then pull the source code from your GitHub repository.
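In a Pipeline job, the same checkout can be expressed with the git step. A minimal sketch; the repository URL, branch, and credentials ID are placeholders, and the credentials entry must already exist in Jenkins if the repository is private:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    // Clone the repository before building.
                    git url: 'https://github.com/example/hello-world.git',
                        branch: 'main',
                        credentialsId: 'github-creds'   // only needed for private repositories
                }
            }
        }
    }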
Building Delivery Pipeline
Delivery Pipeline
• A delivery pipeline is an automated expression of your
process for getting software from version control right
through to your users and customers.
• Every change to your software (committed in source
control) goes through a complex process on its way to
being released.
• This process involves building the software in a reliable
and repeatable manner, as well as the progression of
the built software (called a "build") through multiple
stages of testing and deployment.
Delivery Pipeline
• A delivery pipeline consists of the stages an application goes through from development through to production.
• These stages may vary from one organization to another, and may also vary from one application to another based on the organization's needs, software delivery process, and maturity.
• The level of automation may also vary.
Stages of a Pipeline
• Build/Develop
• Commit
• Test
• Stage
• Deploy
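These stages map naturally onto a Jenkins Pipeline skeleton. A minimal sketch; the shell commands are placeholders for whatever build, test, and deployment scripts a project actually uses:

    pipeline {
        agent any
        stages {
            stage('Build')  { steps { sh 'make' } }                     // compile into a binary artifact
            stage('Commit') { steps { sh 'make commit-tests' } }        // fast commit-stage checks
            stage('Test')   { steps { sh 'make integration-tests' } }   // deeper dynamic testing
            stage('Stage')  { steps { sh './deploy.sh staging' } }      // deploy to a production-like replica
            stage('Deploy') { steps { sh './deploy.sh production' } }   // promote to production
        }
    }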
Stages of a Pipeline: Build/Develop
• A build/develop process performs the following:
– Pulls source code from a public or private repository.
– Establishes links to relevant modules, dependencies, and
libraries.
– Builds (compiles) all components into a binary artifact.
• Depending on the programming language and the
integrated development environment (IDE), the build
process can include various tools. The IDE may offer
build capabilities or require integration with a separate
tool. Additional tools include scripts and a virtual
machine (VM) or a Docker container.
Stages of a Pipeline: Commit
• Commit tasks typically run as a set of jobs,
including:
– Compile the source code
– Run the relevant commit tests
– Create binaries for later phases
– Perform code analysis to verify health
– Prepare artifacts like test databases for later
phases
Stages of a Pipeline: Test
• During the test phase, the completed build
undergoes comprehensive dynamic testing. It
occurs after the source code has undergone
static testing. Dynamic tests commonly
include:
– Unit or functional testing—helps verify new
features and functions work as intended.
– Regression testing—helps ensure new additions
and changes do not break previously working
features.
Stages of a Pipeline: Test
• Additionally, the build may include a battery of tests for user acceptance, performance, and integration. When testing processes identify errors, they loop the results back to developers for analysis and remediation in subsequent builds.
• Since each build undergoes numerous tests and test cases, an efficient CI/CD pipeline employs automation. Automated testing helps speed up the process and free up time for developers. It also helps catch errors that might be missed and ensure objective and reliable testing.
Stages of a Pipeline: Stage
• The staging phase involves extensive testing for all code changes to verify they work as intended, using a staging environment, a replica of the production (live) environment. It is the last phase before deploying changes to the live environment.
• The staging environment mimics the real production setting, including hardware, software, configuration, architecture, and scale. You can deploy a staging environment as part of the release cycle and remove it after deployment in production.
• The goal is to verify all assumptions made before development and ensure the success of your deployment. It also helps reduce the risk of errors that may affect end users, allowing you to fix bugs, integration problems, and data quality and coding issues before going live.
Stages of a Pipeline: Deploy
• The deployment phase occurs after the build passes all testing and becomes a candidate for deployment in production. A continuous delivery pipeline sends the candidate to human teams for approval and deployment. A continuous deployment pipeline deploys the build automatically after it passes testing.
• Deployment involves creating a deployment environment and moving the build to a deployment target. Typically, developers automate these steps with scripts or workflows in automation tools. It also requires connecting to error reporting and ticketing tools, which help identify unexpected errors post-deployment, alert developers, and allow users to submit bug tickets.
• In most cases, developers do not deploy candidates fully as is. Instead, they employ precautions and live testing to roll back or curtail unexpected issues. Common deployment strategies include beta tests, blue/green deployments, A/B tests, and other crossover periods.
Jenkins Delivery Pipeline
• Pipeline provides an extensible set of tools for
modeling simple-to-complex delivery pipelines "as
code" via the Pipeline Domain Specific Language (DSL)
syntax.
• Typically, the definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile), which in turn is checked into a project's source control repository.
• This is the foundation of "Pipeline-as-Code": treating the continuous delivery pipeline as part of the application, to be versioned and reviewed like any other code.
Jenkins Delivery Pipeline
• Creating a Jenkinsfile provides a number of
immediate benefits:
– Automatically create Pipelines for all Branches and
Pull Requests
– Code review/iteration on the Pipeline
– Audit trail for the Pipeline
– Single source of truth for the Pipeline, which can
be viewed and edited by multiple members of the
project.
Jenkins Delivery Pipeline
• While the syntax for defining a Pipeline, either
in the web UI or with a Jenkinsfile, is the
same, it’s generally considered best practice
to define the Pipeline in a Jenkinsfile and
check that in to source control.
A sample Jenkinsfile
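A minimal declarative Jenkinsfile consistent with the annotations below; the make targets and the test report path are placeholders:

    pipeline {
        agent any                               // ①
        stages {
            stage('Build') {                    // ②
                steps {                         // ③
                    sh 'make'                   // ④
                }
            }
            stage('Test') {
                steps {
                    sh 'make check'
                    junit 'reports/**/*.xml'    // ⑤
                }
            }
            stage('Deploy') {
                steps {
                    sh 'make publish'
                }
            }
        }
    }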
① agent indicates that Jenkins should allocate an executor and workspace for this part of the Pipeline.
② stage describes a stage of this Pipeline.
③ steps describes the steps to be run in this stage.
④ sh executes the given shell command.
⑤ junit is a Pipeline step provided by the JUnit plugin for aggregating test reports.
Why Pipeline?
• Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive continuous delivery pipelines.
• By modeling a series of related tasks, users can take advantage of the many features of Pipeline:
– Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
– Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
Why Pipeline?
– Pausable: Pipelines can optionally stop and wait
for human input or approval before continuing the
Pipeline run.
– Versatile: Pipelines support complex real-world
continuous delivery requirements, including the
ability to fork/join, loop, and perform work in
parallel.
– Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.
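The Pausable and Versatile points above can be illustrated with the input and parallel directives. A minimal sketch; the stage names and commands are placeholders:

    pipeline {
        agent any
        stages {
            // Versatile: run independent test suites in parallel.
            stage('Tests') {
                parallel {
                    stage('Unit')        { steps { sh 'make unit-tests' } }
                    stage('Integration') { steps { sh 'make integration-tests' } }
                }
            }
            // Pausable: wait for human approval before promoting to production.
            stage('Release') {
                input {
                    message 'Deploy this build to production?'
                }
                steps {
                    sh './deploy.sh production'
                }
            }
        }
    }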
