All of you can use Snipboard.io to share your issues with the trainer or with the participants.
Please click on the link: https://snipboard.io/
Use case
to create an LMS (Learning Management System)
Waterfall
[requirements] - 2 months
[dev] - 2 months
[unit test] - 2 months
[deploy] - 2 months
[go live] - 2 months
Agile
Product Owner -> list of stories
Scrum Master
Scrum meetings for dev, test, devops
Backlog of user stories
Sprints of 2 weeks
----------------------
Devops Tools
-----------------------
Continuous Development - Git
Continuous Build - Maven
Continuous Integration - Jenkins
Continuous Code Review - SonarQube
Continuous Deployment - Docker, Kubernetes
Continuous Delivery - Ansible
Continuous Monitoring - ELK
Continuous Infrastructure - Terraform
------------
---------------
Java
Dotnet
Php
Python
Angular
Nodejs
-------------------
Attributes
--------------------
Idea
Methodology
Culture
Mindset
Discipline
Interaction
Collaboration
Instant or quick feedback
Sharing
Actual outcome vs expected
Organization
-----------------------
Google Drive
software, materials
Running notes
Installation - Covered
OS -Windows or Linux
Cloud- AWS ( popular )
instructions
Theory ->
Handson ->
Running notes
In any of the lab environments, if you see capital letters being
typed, please ensure your Caps Lock is turned off. If you still
see that you are getting only capital letters when you are
typing,
then press and hold the Shift key for 1 minute and release it.
Now you can start typing small letters as usual.
Tools For Each DevOps Stage
● Continuous Development - GitHub
● Continuous Build - Maven
● Continuous Integration - Jenkins
● Continuous Testing - Selenium
● Continuous Deployment - Ansible / Puppet
● Continuous Code Review - SonarQube
● Continuous Delivery - Docker, Kubernetes, Terraform
● Continuous Monitoring - Nagios
AWS Setup Instructions
Create an AWS free tier account from the url
https://aws.amazon.com/free/
Use a debit or credit card (it will not be charged as long as
you stay within the free tier services)
Please do not explore any other services ( use only the ones
which I teach in the session )
Once you have signed up for the AWS free tier,
we can create one instance and start practicing our
tools.
Let's get started by signing in:
https://aws.amazon.com/console/
Click on sign in
After giving username , give password and login
Please ensure that the .ppk file got downloaded, because you
will need this file to connect to the instance using PuTTY (an SSH
tool).
Now we should be able to launch the instance
Now we can see the running instances .
In order to connect to the running instances , We need to
download the putty tool
https://www.chiark.greenend.org.uk/~sgtatham/putty/
latest.html
Download and install the 64-bit version of the tool.
Once installed, click Finish; PuTTY will be successfully installed.
Now let's open PuTTY
Give the hostname as ec2-user@ followed by the Public
IPv4 address, which we will select from the AWS console.
Lets go back to the AWS console
Click on the instance id
Now you can copy the Public IPv4 address
Now go back to PuTTY and paste the Public IPv4 address.
Now go to SSH -> Auth.
Locate the .ppk file which we saved from AWS.
Once you have selected the .ppk file,
go back to the Session screen
and save the session with a new name.
Now click Save and click Open.
Now you will get a terminal window.
It displays a few details; type Ctrl+C to get to the prompt.
(To keep the terminal session alive, you can also set a keepalive interval under PuTTY's Connection settings.)
Linux commands
sudo su : the user will get admin (root) permissions
pwd : present working directory
Gives the present working directory
mkdir : make directory; this will create a directory
mkdir foldername
Example: mkdir devops
ls : lists the created folders and files
rm -r : (this will remove the folder)
vi hello (visual editor, to create text documents)
Once it is opened,
please type "i" on the keyboard to insert some keystrokes.
We can start typing now;
type anything you wish.
Now press Escape on the keyboard,
then type
:wq (after pressing Escape, this saves and quits)
and press Enter to come out.
Sometimes we may get permission issues (then type :q! , the !
is for forcing).
cat : used to view the text inside files
Example: cat hello
The above will display the contents of the hello file.
rm : used to remove files (please note this is without -r,
which we used to delete folders)
Example: rm hello
mv : to rename a file
Example: mv hello hi
hello is the source filename
hi is the destination filename
clear: to clear the terminal
ls -ltr : to view the files along with the permissions and
timestamp
ls -a : this for hidden files
chmod : is for changing the file permissions
(changing the mode, hence chmod)
Please remember, for file permissions
we use chmod with a three-digit octal mode.
Permission bits: three letters, rwx (read, write, execute).
We have numbers designated for read, write and execute:
4 - read
2 - write
1 - execute
Mainly , we have three types of users in linux which we will
consider all the time
Owners:
Groups:
Others:
So whenever we consider the chmod command we should
consider the above three types of users .
chmod 444 filename (this gives all the types of users, i.e.
owners, groups, others, read-only access;
the file can only be read, but cannot be written to nor executed
by any user)
chmod 222 filename (this gives all the types of users write-only
access; the file can only be written to)
chmod 111 filename (this gives all the types of users execute-only
access; the file can only be executed)
Suppose I want to give read and write only:
chmod 666 filename (this gives both read and write access)
For full permissions: chmod 777 (read, write and execute)
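The octal digits above can be tried out directly; a quick sketch (demo.txt is just an example filename):

```shell
touch demo.txt          # create an empty scratch file

chmod 666 demo.txt      # 6 = 4+2 = read+write, for owner, group and others
ls -l demo.txt          # shows -rw-rw-rw-

chmod 444 demo.txt      # 4 = read only, for everyone
ls -l demo.txt          # shows -r--r--r--

chmod 777 demo.txt      # 7 = 4+2+1 = read+write+execute, for everyone
ls -l demo.txt          # shows -rwxrwxrwx
```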
apt-get update : to update your software repository index
apt-get install softwarename : to install any software
apt-get remove softwarename : to remove any software
Check the status of the nginx server
systemctl status nginx : to check the status of the server (type
ctrl+c to come out of the prompt)
systemctl stop nginx : to stop the server(type ctrl+c to come
out of the prompt)
systemctl start nginx : to start the server(type ctrl+c to come
out of the prompt)
Git Setup
how will we merge the code ?
versioning (who coded what? author information and timestamp)
conflicts? overriding of the code
backtracking
access control
authorization
authentication
integrity ( data should not be tampered (no data corruption))
encryption
ease of use
Intuitive
handy tool
logs
The above are the features that git supports.
Ensure that you have git client installed
sudo su (get the root permissions to install)
If you want to install git on any Linux machine, it's: yum install
git (CentOS) / apt-get install git (Ubuntu)
Type git to see if it's installed.
Create GitHub Account
https://github.com/
Please give your email , username and choose a password
Verify the email and you should be able to access
Creating Practice Directory
For Windows
1. Go to c drive
2. Create a folder called gitpractice
For Linux (sudo su)
1. cd /home/username/
2. mkdir gitpractice
3. cd gitpractice
4. pwd - it returns the directory name:
/home/username/gitpractice
yum install git -y (to install git, we give this command)
git status
Check if this is a valid git repository.
.git
|
tracks your repo
|
tracks the branches
|
keeps track of new changes
|
initialize repo and keeps checking for new changes
|
On the whole we can say the (.git) folder maintains metadata
(it tracks the whole repository and keeps the information
intact).
Install tree software to view folders more clearly
git init
You can use 'git init' to initialize git in the directory we just
created.
apt-get install tree -y
(or)
yum install tree -y
cd .git
Type tree
.
├── branches - info about the branches
├── config - contains configuration info
├── description - general description
├── HEAD -points out to the head
├── hooks - Lets not worry much about this :)
│ ├── applypatch-msg.sample
│ ├── commit-msg.sample
│ ├── post-update.sample
│ ├── pre-applypatch.sample
│ ├── pre-commit.sample
│ ├── prepare-commit-msg.sample
│ ├── pre-push.sample
│ ├── pre-rebase.sample
│ └── update.sample
├── info
│ └── exclude
├── objects
│ ├── info
│ └── pack
└── refs - commits info
├── heads
└── tags
cd ..
pwd
/home/ec2-user/gitpractice
Once you've initialized the directory you can go back and check
the folder; we have lots of contents related to tracking the
files (config files).
ls -a (it will list all files, hidden files as well, including .git)
Creating Hello.java Practice File
Create a file inside gitpractice (ensure you are at the root
directory of gitpractice)
1. vi Hello.java
2. Press i to insert
3. Insert the following snippet
class Hello
{
    public static void main(String args[])
    {
        System.out.println("hello");
    }
}
4. Press escape
Type :wq and press enter to save /quit
git status
You can use the git status command to check the status of the git
repo.
1. Type 'git status' (make sure you are in your initialized
directory)
2. You will see it shows an untracked file named 'Hello.java'.
Next we will need to add the untracked file to be tracked.
git add
git add can be used to add untracked files to be tracked and
put into a staging area before getting committed (pre-commit).
1. Type: git add Hello.java
2. Now type: git status
3. You will see the Hello.java file is now listed under
tracked files and staged.
After staging,
let's delete Hello.java:
rm Hello.java
Now git says the Hello.java file is deleted:
git status
git status says you have deleted Hello.java.
git checkout or git restore
If you delete Hello.java after adding it using git add
Hello.java,
your file will be tracked, so in case it is accidentally deleted
you can restore it with 'git checkout -- Hello.java' (please note
there is a space after --).
git rm --cached (unstaging)
git rm --cached FILE_NAME will allow you to untrack a
specific file that you've already added to be tracked.
git rm --cached Hello.java
Now since we removed Hello.java from the staging area,
you can delete Hello.java now:
rm Hello.java
Your git status will no longer complain or alert that Hello.java
is deleted, because Hello.java is no longer tracked; we
unstaged it.
git commit
git commit is used to commit the changes to your tracked
files, basically saving the current state of the tracked files.
1. Return to the directory we created earlier called
'gitpractice'
2. Type git commit -m "made changes to Hello.java"
To set your github configuration to commit using your
information you can use the commands below.
Two ways to achieve this:
1st Approach
1. git config --global user.email "[email protected]"
2. git config --global user.name "YOUR_NAME"
3. git commit --amend --reset-author
If it opens an editor, just type :wq to save and exit.
(or)
2nd Approach
git config --global --edit
Now make the changes to the name and email fields.
To exit,
type Ctrl+X,
then type Y (yes, as an acknowledgement).
Once you come out of the editor,
git commit --amend --reset-author will reset the author information,
so henceforth you will see clear author information.
Practice Exercises
Exercise 1 :
1. Create a new directory
2. initialize the directory to use git
3. Create a test file in the directory
4. Check the status
5. Add to staging
6. Unstage the file
Exercise 2:
1. Experiment what happens if you delete an unstaged file
2. Experiment what happens if you delete a staged file
Exercise 3:
1. Create a file
2. Stage the file
3. Try committing the staged files
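Exercises 1 and 3 above can be sketched in one run (directory and file names are only examples; the inline -c flags supply throwaway author details in case git config was not set):

```shell
mkdir gitexercise && cd gitexercise   # Exercise 1: a fresh directory
git init                              # initialize it for git
echo "test" > test.txt                # create a test file
git status                            # shows test.txt as untracked
git add test.txt                      # stage it
git rm --cached test.txt              # unstage it again (Exercise 1, step 6)

# Exercise 3: stage the file again and commit it
git add test.txt
git -c user.email="you@example.com" -c user.name="you" \
    commit -m "first commit"
git log --oneline                     # shows the new commit
```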
Now go to github.com
Open with your credentials : username and password
Once you login , you need to create a remote repository
On the extreme right hand side , you will see a + symbol
Click on it and add a new repository
Give a repository name
Select public ( by default)
Create repository
Now it will show the steps , after creating the repository
The common steps that you need to follow on your local
prompt are the following
(Note: please use master instead of main, because we are
using the master branch, not the main branch)
Config file will be updated with the remote repo ( destination
repository ) once we run the following command
git remote add origin
https://github.com/SrikanthPB/sampleabc.git
(repo name; please give your repository name, not mine)
While pushing , git will ask you to enter the credentials ,
Please enter the credentials
Password authentication was deprecated in August 2021, so we
need to generate a token.
Let's use an access token instead of a password.
Go to Settings -> Developer settings -> Personal access tokens
and generate a new token.
Copy the token, and when prompted for the password, paste the
token instead.
Note: please select all the scopes shown for the authentication
to work properly.
It may open a new page where you will enter the credentials ,
please enter
And authorize the git to use it .
Now you can see everything is pushed to your remote ,
Go Back to the github and check if everything is pushed
To check the branches in the local git client, use:
git branch -l (local)
git branch -r (remote)
git branch -a (all branches)
Branching strategy
--------
We always keep our master branch untouched.
We can always create a new branch, preferably matching
your JIRA ticket (or any bug-tracking tool number):
git branch feature-101
This will create the branch.
In order to move to the feature-101 branch, give the
following command:
git checkout feature-101
Start fixing the issue on this branch -> and commit it (later we
can merge if wanted).
Modify the existing file, add some text:
git add Hello.java
git commit -m "Added"
Push to the remote origin feature-101 branch by giving the
following command:
git push -u origin feature-101
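The whole feature-branch flow can be sketched in a throwaway repo (the real push at the end is left as a comment because it needs your remote to be configured; the -c flags supply example author details):

```shell
mkdir branchdemo && cd branchdemo && git init -q
git -c user.email="you@example.com" -c user.name="you" \
    commit -q --allow-empty -m "initial commit"   # so the default branch exists

git branch feature-101          # create the branch
git checkout feature-101        # move onto it
echo "fix for the ticket" >> Hello.java
git add Hello.java
git -c user.email="you@example.com" -c user.name="you" commit -m "Added"
git branch -l                   # shows feature-101 as the current branch

# With a remote configured (git remote add origin <url>) you would now run:
# git push -u origin feature-101
```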
Sometimes, if we want to ignore a few unnecessary files, we
can use .gitignore:
vi .gitignore
Add patterns with wildcards so that all such kinds of files will
be ignored, like:
*.bak (all .bak files will be ignored)
*.txt (all .txt files will be ignored)
Please add the .gitignore file to staging and commit it as well:
git add .gitignore
git commit -m "added"
Push to the remote (feature-101).
Whenever we type git status,
git can usually pick up all the local commit information,
so after a new commit
it says your local repo is ahead of the remote / origin.
When we want to know whether the remote repo has a
commit ahead of our local branch, we should use
git fetch (so that it fetches all the information about the
newest commits that happened on the remote repo).
When we want to make a merge, go to the branch where we
want the merge to take place.
Go to the master branch:
git checkout master
Now type git merge feature-101 (so you are getting all the
files from feature-101 into master).
Difference between git fetch and git pull:
git fetch will get only the commit information;
git pull will do the fetch and also the merge.
git pull = git fetch + git merge
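To see fetch + merge versus pull without touching GitHub, a bare repository can stand in for the remote (all the directory names here are made up for the sketch):

```shell
# A bare repo plays the role of the remote (e.g. GitHub)
git init -q --bare remote.git

# First clone: push an initial commit
git clone -q remote.git alice
cd alice
git -c user.email="a@example.com" -c user.name="alice" \
    commit -q --allow-empty -m "initial"
git push -q origin HEAD
cd ..

# Second clone starts in sync...
git clone -q remote.git bob

# ...then alice pushes another commit that bob does not have yet
cd alice
git -c user.email="a@example.com" -c user.name="alice" \
    commit -q --allow-empty -m "remote change"
git push -q origin HEAD
cd ../bob

BRANCH=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git version
git fetch -q origin                         # gets only the commit info (metadata)
git merge -q "origin/$BRANCH"               # brings the fetched commit into the local branch
git log --oneline                           # now shows both commits
# git pull would have done the fetch + merge in one step
```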
Please make some changes in the remote master branch,
then come to your local repo and try to push some changes;
you will get an error like:
! [rejected] master -> master (fetch first)
error: failed to push some refs to
'https://github.com/srikanthprathivadi/practicegit.git'
hint: Updates were rejected because the remote contains
work that you do
hint: not have locally. This is usually caused by another
repository pushing
hint: to the same ref. You may want to first integrate the
remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for
details.
Let's try fetching first.
Now you will have the latest remote commit inside the metadata
folder (.git),
under .git/refs/remotes/origin/master.
cat master (this will show the latest commit id made on the
remote branch)
Try pushing again (it will not work, because we need to merge
the latest code changes
from the remote into your local).
Try:
git merge origin/master
(this will merge the remote changes into your local branch;
resolve the conflicts and try again, it will work.)
Now let's try to avoid the two commands,
i.e. git fetch and git merge:
git pull (you will see it combines both git fetch and git
merge).
(Difference between git pull and git fetch: git fetch will get only
the metadata; git pull will get the data itself and merges it
with your local.)
Stash
------
When we unintentionally do coding on a wrong branch, we
don't need to rewrite it again.
Instead we can use stash.
For this,
Example:
vi Hello.java
I typed "hi from master branch" (my intention is master, but I
am on the feature-101 branch).
What I can do in this case is type:
git stash (or git stash save)
This will take all the uncommitted changes and put them into a
temporary location.
Now we can go to the other branch and type
git stash list (and see the list of stashes available). In order
to make it effective,
we can type
git stash apply stash@{0} (run this on the branch where we
want the changes to land).
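A minimal stash round-trip in a throwaway repo (file, branch and author names are just examples):

```shell
mkdir stashdemo && cd stashdemo && git init -q
echo "original" > Hello.java
git add Hello.java
git -c user.email="you@example.com" -c user.name="you" commit -q -m "init"

echo "hi from master branch" >> Hello.java   # the edit made on the wrong branch
git stash                     # put the uncommitted change into a temporary location
git stash list                # shows stash@{0}

git checkout -q -b feature-101    # move to the branch where the edit belongs
git stash apply stash@{0}         # re-apply the stashed change here
cat Hello.java                    # the edit is back, now on feature-101
```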
Lesson of day 1:
The most famous commands in git, from adding to ending, are:
git init
git add
git commit
git push
To apply and remove a stash entry in one step, use:
git stash pop stash@{0}
(use any of the stash@{n} entries)
git cherry-pick
git cherry-pick is used when you have already committed the
code (stash is used before commit).
Once you commit the code on the wrong branch, we have
to use cherry-pick.
Now, we go to the right branch and use the
following cherry-pick command:
git cherry-pick <commit-id>
git log (will help you find any commit id to cherry-pick, but
mostly we will pick up the recent commit id)
In my case:
git cherry-pick 9eef
After the above command, it will bring over the modified file;
now I can go ahead and use the same commit.
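That flow can be reproduced in a throwaway repo (branch and file names are illustrative; your commit id will be different from 9eef):

```shell
mkdir pickdemo && cd pickdemo && git init -q
git -c user.email="you@example.com" -c user.name="you" \
    commit -q --allow-empty -m "init"
BASE=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git version

git checkout -q -b wrong-branch           # oops: commit the fix on the wrong branch
echo "the fix" > fix.txt
git add fix.txt
git -c user.email="you@example.com" -c user.name="you" commit -q -m "fix"
COMMIT=$(git rev-parse --short HEAD)      # the commit id (git log shows it too)

git checkout -q -b right-branch "$BASE"   # go to the branch we actually wanted
git -c user.email="you@example.com" -c user.name="you" cherry-pick "$COMMIT"
ls                                        # fix.txt is now on this branch as well
```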
Exercise 4 : Try committing a new file on a feature branch and
then use the cherry-pick command to get the committed code
from another branch to this branch.
Difference between stash and cherry-pick: stash is used
before the commit (only after staging);
cherry-pick is used after the commit.
git show
You can use git show to see the particular changes made during
a commit:
git show <commit-id>
Exercise 5: Please use git show <commit-id> (give your own
commit id) to see the changes you made during a particular commit.
git reset
git reset is used when we want to rewind back to an old commit
(so your HEAD will then be pointing at the older
commit).
[gh43] [6454] [3232]
  ||
 HEAD
Currently my git HEAD is pointing at gh43. If I want to
rewind / go back to an earlier commit,
I can use
git reset --hard 6454
This will take me back to the earlier commit (you can check this
in the logs as well).
Now after reset:
[gh43] [6454] [3232]
         ||
        HEAD
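The HEAD movement can be watched in a scratch repo (three dummy commits stand in for gh43, 6454 and 3232):

```shell
mkdir resetdemo && cd resetdemo && git init -q
for n in 1 2 3; do
  echo "$n" > file.txt
  git add file.txt
  git -c user.email="you@example.com" -c user.name="you" commit -q -m "commit $n"
done
git log --oneline           # three commits; HEAD is at "commit 3"

OLD=$(git rev-parse HEAD~1) # the previous commit ("commit 2")
git reset --hard "$OLD"     # rewind HEAD and the working tree to it
git log --oneline           # only two commits remain in the log
cat file.txt                # prints 2, the file content is rewound too
```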
Git: add all files
git add -A
git add filename1 filename2 (this will add these two files)
Git: commit all files
git commit -a -m "commit all files"
git commit -m "commit file1 and file2" filename1 filename2
git diff branchname1..branchname2
We can check the differences between two branches.
Exercise 6: Try to find out the differences between the
branches and observe the changes.
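Exercise 6 can be tried in a scratch repo like this (branch and file names are examples):

```shell
mkdir diffdemo && cd diffdemo && git init -q
echo "line1" > notes.txt
git add notes.txt
git -c user.email="you@example.com" -c user.name="you" commit -q -m "base"
BASE=$(git rev-parse --abbrev-ref HEAD)    # master or main

git checkout -q -b feature-101
echo "line2" >> notes.txt
git add notes.txt
git -c user.email="you@example.com" -c user.name="you" commit -q -m "feature change"

git diff "$BASE"..feature-101   # shows +line2, the line feature-101 adds
```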
Git GUI
-------
This is a graphical interface tool, which we can use with
Windows and Mac.
We can do many things like staging, adding and committing.
You can make changes and see them by clicking Rescan inside
the GUI; you can see unstaged changes and stage them to
commit from the Commit option on the top.
Once you see the changes staged,
commit by giving a message and push it (while pushing it will
ask for a branch).
Exercise 7: Please add some changes locally, stage them, and
commit them.
Deleting a branch
git branch -d branchname --force (or equivalently: git branch -D branchname)
---------------------------------
Maven
(prerequisites)
Eclipse
Maven and Java (check my Google Drive, it has all the
software to be downloaded) or the provided lab environment.
Maven is a build tool which is widely used for dependency
management, packaging and versioning,
and for executing most SDLC
(software development life cycle) goals.
Maven commands are simple one-word commands, helping
developers to reduce the manual workload.
If anyone is facing an error like:
[ERROR] Error executing Maven. [ERROR] java.lang.IllegalStateException: Unable to load cache item
[ERROR] Caused by: Unable to load cache item [ERROR] Caused by: Could not initialize class
com.google.inject.internal.cglib.core.$ReflectUtils
ensure that you have the relevant Java packages installed, e.g.:
sudo apt-get install openjdk-8-jre openjdk-8-jdk (Ubuntu), or
sudo yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel (CentOS)
And then make sure that maven uses the correct version. The simplest way to
do this is to add the following line to the end of ~/.mavenrc (Creating it if
needed):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
compile -> give mvn compile (a simple command replacing
manual and monotonous commands every time)
Ensure you have pom.xml in the directory in which you will run
the following goals:
install -> supplies the dependencies for your program
(otherwise a challenge)
package -> packages the code for portability
test -> unit-test validation done by the developers themselves
deploy -> produces readily runnable code
clean -> cleans the above generated files
In order to automate all of these build goals (clean, compile,
install, package, test, deploy) during the software
development life cycle, we use maven.
That's why we use maven.
Windows installation
To get started with the installation , you will need to
download maven and extract it
https://github.com/SrikanthPB/mavenwebhook
Inside the lab :
sudo su
yum install maven -y
mkdir maven
cd maven
git clone https://github.com/SrikanthPB/mavenwebhook
(please don't change the username SrikanthPB, it's a public
repo, so you can all clone the same)
ls
cd mavenwebhook
mvn install (now build will be successful)
When you type ls here , you should see pom.xml
Downloading apache maven ( without using google drive)
https://mirrors.estointernet.in/apache/maven/maven-
3/3.8.1/binaries/apache-maven-3.8.1-bin.zip
After downloading (for example, I downloaded it into my
Downloads folder),
copy the following path and keep it ready:
C:\Users\Administrator\Downloads\apache-maven-3.6.3-bin
Inside the bin we have the mvn command which is used for
running our maven projects.
Setting up the PATH inside the environment variables, so that
we can run the maven command from anywhere in our system:
right-click on This PC, go to Advanced system settings and click
on Environment Variables.
Inside the environment variables,
go to Path and click Edit to edit the Path variable,
and inside the Path variable click New and add the following:
C:\Users\sri.m\Downloads\apache-maven-3.8.1-bin\apache-
maven-3.8.1\bin (which we copied earlier)
Click OK for all.
Maven for linux installation
---------------------
yum install maven (CentOS)
apt-get install maven (Ubuntu)
pom.xml (Project Object Model)
In order to construct a house,
we need a construction plan - a blueprint.
In order to build a software project
according to your plan, we use
maven with the help of pom.xml.
War - web archive (all the web application
components compressed and arranged inside a
single .war file, e.g. google.war)
Ear - enterprise archive (a compressed version of enterprise
application components arranged inside a single .ear file,
e.g. google.ear)
Jar - java archive (a compressed version of java application
components arranged inside a single .jar file,
e.g. google.jar)
Major version - 1.0.0 to 2.0.0 to 3.0.0
Minor version - 1.1.0 to 1.2.0 to 1.3.0
Bug fix version- 1.1.1 to 1.1.2 to 1.1.3
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<!-- the lines above connect to the central repo, from where the plugins and dependencies are downloaded -->
<modelVersion>4.0.0</modelVersion> <!-- the version of the POM model itself -->
<groupId>com.mkyong</groupId> <!-- it's the reverse of your website; e.g. the reverse of google.com is com.google -->
<artifactId>CounterWebApp</artifactId> <!-- this is the name of your application -->
<packaging>jar</packaging> <!-- the packaging; it can be war, ear or jar -->
<version>1.0-SNAPSHOT</version> <!-- incremental version: 1.0, 1.1 or 1.1.1 -->
<name>CounterWebApp Maven Webapp</name> <!-- display name of the application -->
<url>http://maven.apache.org</url> <!-- url of the application -->
<properties> <!-- properties help to define our versions of dependencies and change them in one place -->
<jdk.version>1.7</jdk.version>
<spring.version>4.1.1.RELEASE</spring.version>
<jstl.version>1.2</jstl.version>
<junit.version>4.11</junit.version>
<logback.version>1.0.13</logback.version>
<jcl-over-slf4j.version>1.7.5</jcl-over-slf4j.version>
</properties>
<dependencies> <!-- dependencies are the libraries which help us to run the program; improper dependencies will result in compilation errors -->
<!-- Unit Test -->
<dependency>
<groupId>junit</groupId> <!-- the group name of the dependency -->
<artifactId>junit</artifactId> <!-- the artifact id, like the jar file name junit.jar -->
<version>${junit.version}</version> <!-- junit.version points to the properties above, where we set junit version 4.11 -->
</dependency>
<!-- Spring Core -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.version}</version>
<exclusions>
<exclusion> <!-- exclude what you don't require -->
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>${jcl-over-slf4j.version}</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>${logback.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>${spring.version}</version>
</dependency>
<!-- jstl -->
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>${jstl.version}</version>
</dependency>
</dependencies>
<build>
<finalName>CounterWebApp</finalName>
<plugins>
<!-- Eclipse project -->
<plugin> <!-- define the plugins; different plugins for different goals like clean, install, deploy -->
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
<version>2.9</version>
<configuration>
<!-- Always download and attach
dependencies source code -->
<downloadSources>true</downloadSources>
<downloadJavadocs>false</downloadJavadocs>
<!-- mvn eclipse:eclipse -
Dwtpversion=2.0 -->
<wtpversion>2.0</wtpversion>
</configuration>
</plugin>
<!-- Set JDK Compiler Level -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId> <!-- this is the compiler plugin -->
<version>2.3.2</version>
<configuration>
<source>${jdk.version}</source>
<target>${jdk.version}</target>
</configuration>
</plugin>
<!-- For Tomcat -->
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId> <!-- this is the tomcat plugin -->
<version>2.2</version>
<configuration>
<path>/CounterWebApp</path>
</configuration>
</plugin>
</plugins>
</build>
</project>
Sample project , please download from the following
https://github.com/SrikanthPB/mavenwebhook
Present working directory : your username will be different
pwd
/home/troubleshooting/maven/mavenwebhook
Linux
Let's download it using git:
mkdir maven
cd maven
git clone https://github.com/SrikanthPB/mavenwebhook.git
cd mavenwebhook
Here you can see pom.xml
We can type the goals from where we see pom.xml
mvn install (please note that the second time it takes less time,
because the dependencies are stored in the local repo)
mvn compile
All the downloaded dependencies will be inside the .m2 folder
cd /root/.m2/repository
[root@ip-172-31-35-182 ~]# ls -a
. .. .bash_history .bash_logout .bash_profile .bashrc .cshrc
.gitconfig .m2 .ssh .tcshrc
[root@ip-172-31-35-182 ~]# pwd
/root
[root@ip-172-31-35-182 ~]# cd .m2
[root@ip-172-31-35-182 .m2]# ls
repository
[root@ip-172-31-35-182 .m2]# cd repository/
[root@ip-172-31-35-182 repository]# ls
aopalliance ch classworlds com commons-cli commons-
lang commons-logging jstl junit org -> these are the
dependencies which maven downloaded for us
[root@ip-172-31-35-182 repository]# clear
[root@ip-172-31-35-182 repository]# pwd
/root/.m2/repository
[root@ip-172-31-35-182 repository]#
Try deleting the repository inside /root/.m2
rm -rf repository
All dependencies will be deleted
Now let's go back to the maven folder ( cd /home/ec2-
user/maven/mavenwebhook ( this is where I stored my
maven project which downloaded from the github)
And run the following
mvn install
This will install all the dependencies.
mvn package (will help to package as per the packaging
instruction given inside the pom.xml)
vi pom.xml
<packaging>jar</packaging>
<packaging>war</packaging> (change from jar to war)
Save it with :wq
Now run mvn package
You should see a .war file inside the target directory.
Now lets try
mvn clean
This will delete the target directory
mvn test
This will run all the test cases and give a summary.
Windows
We can create a maven project from Eclipse.
Launch Eclipse.
Create a new maven project by right-clicking on Project
Explorer:
New -> Other
We will now have the pom.xml created (Project Object
Model).
XML
------
Stands for extensible markup language (the tags inside xml are
user-defined),
unlike html (which has predefined tags, which means you
cannot use user-defined tags).
Every xml tag will have a start and an end:
<starting> : this is the start of the tag
</starting> : the / represents the end of the tag
Between these start and end tags we insert our data
(mostly xml is used for communication or information-passing
scenarios).
Now lets run maven
Right click on the project -> properties -> resource
Copy path from above -> use ctrl+ c to copy
Open a terminal ->
sudo su
Type cd, then right-click and select Paste.
You will get the path pasted; it looks something like:
cd /home/troubleshooting/eclipse-workspace/com.batch
From this path you can now
type mvn install (here you will see pom.xml).
Your build should succeed
We have three types of versions:
Major version, like 1.0 to 2.0
Minor version, like 1.1 to 1.2
Bug fix version, like 1.1.1 to 1.1.2
Local repo
Every time you run a maven build, by default everything
gets downloaded into
/home/troubleshooting/.m2/repository/
Try deleting them and run mvn install again; you will
understand.
Since we don't have so many artifacts, we don't have much
downloaded into the local repo,
i.e. .m2.
In order to experience this
Please download project from
https://github.com/SrikanthPB/mavenwebhook
And mvn install
Now you can see all the libraries in
/home/troubleshooting/.m2/repository/
Try deleting and give mvn install , you will understand
If you can't see the .m2 folder, click show hidden files.
Goals of maven
Go to the project root directory where pom.xml is located .
And type cmd in the top address bar (remove any other path;
only type cmd).
mvn compile
Compiles all the files
mvn clean
Clean will delete the target directory , go back and check ,
target folder will be deleted
This folder has contents related to previous maven builds .
mvn install
This will install all the dependencies, and any required plugins
will be downloaded as well.
As per our example, it has taken us 1:18 (one minute eighteen seconds).
Let's delete everything in the repo:
C:\Users\username\.m2\repository
and issue the mvn install again
Observe the difference:
basically, since we deleted everything in the repo, it started
downloading everything again.
Exercise 8: Please run mvn install after deleting everything in
the local repo
C:\Users\Administrator\.m2\repository
and please let us know how much time it took after
deleting the repo.
mvn package
mvn package is used to mainly package your project as per the
given package option inside pom.xml
Change <packaging>jar</packaging> to war
Exercise:9
Now you can combine goals and give something like
mvn clean package
If you want the latest artifacts (no stale artifacts), combine
with clean.
mvn test
Test is used for running junit test cases written inside the class
Nexus
---------
After downloading, open pom.xml using EditPlus.
Replace the existing lines (lines 1-9) with the following lines:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.accenture.mavensample</groupId>
<artifactId>accenture</artifactId>
<packaging>jar</packaging>
<version>5.0.0</version>
<name>my-maven</name>
<url>http://maven.apache.org</url>
Note : <modelVersion> stays 4.0.0 - it is the POM schema
version , not your project version ( a new project's own
<version> usually starts at 1.0.0 )
Make the groupId com.accenture.mavensample
<groupId>com.accenture.mavensample</groupId>
<artifactId>accenture</artifactId>
Prerequisite for maven is java , ensure java is installed
You can check by typing java -version in the command prompt
Now time for hands on session ,
C:\Users\Administrator\.m2\repository
You can look at the repository you may have probably some
dependencies which are previously
Downloaded .
mvn package
mvn package packages your project according to the
<packaging> option inside pom.xml
Change <packaging>jar</packaging> to <packaging>war</packaging>
Project may fail because web.xml is not present , please
include the following plugin ( around line 38 in EditPlus )
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.6</version>
<configuration>
<failOnMissingWebXml>false</failOnMissingWebXml>
</configuration>
</plugin>
Code updated
https://github.com/SrikanthPB/nexus/blob/main/pom.xml
( this is always the latest code )
Nexus
https://drive.google.com/file/d/
1Hm9lFsWd6QgqiwnPy3QILBmRoa0vavX3/view
Please access the above link and download the nexus
document
Download
https://sonatype-download.global.ssl.fastly.net/repository/
downloads-prod-group/professional-bundle/nexus-
professional-2.14.19-01-bundle.zip
Extract the folder
Go to
C:\devops\nexus-professional-2.14.19-01-bundle\nexus-
professional-2.14.19-01\bin
Type cmd in the address bar
go to the location and run nexus.bat install
then run nexus.bat start
it will take time to start nexus
Now go to browser
localhost:8081/nexus
You can navigate to nexus where it asks you for license
Now for the license , we need to create one
I have created using my gmail
https://my.sonatype.com/profile/licenses
here we need to download the license after giving the
credentials
click download license
you will download a file
sonatype-repository-manager-trial
Now go back to the nexus page where we started on local
http://localhost:8081/nexus
click already have license , upload the license by navigating to
the right directory
now click I agree , license will be installed successfully
The page should look something like above after login
successfully
On the left hand side click Repositories
We will create a repository
Click on the Add button as in the following screenshot
now click Hosted Repository
Select Save at the bottom
Create two of them in total , one for release and one for
snapshot
now get the url of it
http://localhost:8081/nexus/content/repositories/nexus-
release/
http://localhost:8081/nexus/content/repositories/nexus-
snapshot/
you can see the url next to your created repository
Proxy repository ( this is like a replica of a public repo )
For example go to
https://repo.maven.apache.org
and click on maven2 , now copy the url , you will get url like
https://repo.maven.apache.org/maven2/
now click proxy repository ,
Now lets create a group , now click add and create a group
and select
release
snapshot
proxy as below
User roles
We should grant users access according to their roles .
Now go back to the nexus home page , Under security click
roles
Create role , name and id as -devops-admin
Select all from add and add all the repos
Ideally we will create role specific to the project requirement
Next we will go to settings.xml
C:\Users\administrator\Downloads\apache-maven-3.6.2-bin\
apache-maven-3.6.2\conf
<server>
<id>devops-deploy</id>
<username>devops-deploy</username>
<password>srikzz@1</password>
</server>
now ensure that this server id matches a repository id inside
your pom.xml ( example like below ) - only then will your
upload work properly
<distributionManagement>
<repository>
<id>nexus-zar-deploy</id>
<name>Internal Releases</name>
<url>http://localhost:8081/nexus/content/repositories/nexus-
release/</url>
</repository>
<snapshotRepository>
<id>nexus-deploy</id>
<name>Internal Releases</name>
<url>http://localhost:8081/nexus/content/repositories/nexus-
snapshot/</url>
</snapshotRepository>
</distributionManagement>
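One gotcha worth automating : the <id> under <server> in settings.xml has to match the <id> of the repository you deploy to in pom.xml. A rough shell check like this can catch a mismatch before mvn deploy ( ids_match is our own helper , and grep/sed on XML is only a sketch , not a real XML parser ):

```shell
# ids_match: hypothetical helper - checks that the first <id> found in a
# settings.xml fragment also appears as an <id> in the pom.xml.
# Note: plain sed/grep on XML is fragile; use only as a quick sanity check.
ids_match() {
  settings="$1"; pom="$2"
  sid=$(sed -n 's/.*<id>\(.*\)<\/id>.*/\1/p' "$settings" | head -n 1)
  if grep -q "<id>$sid</id>" "$pom"; then
    echo "ok: '$sid' found in pom.xml - deploy can pick up the credentials"
  else
    echo "mismatch: '$sid' not in pom.xml - the upload will not authenticate"
  fi
}
```

Usage : ids_match ~/.m2/settings.xml pom.xml from the project root.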
The entire working project is in the github
https://github.com/SrikanthPB/nexus
Go to the project root directory where pom.xml is located .
And type cmd in the top address bar (remove any other path ,
only type cmd)
mvn deploy
Your artifact will be uploaded to the Nexus repo , and can be
downloaded from there as well
I am sharing my jenkins.ppt inside softwares folder , going
forward you guys can check for any material , I will place
inside my C:/Softwares folder
To access
Go to start menu
Run command
\\192.168.19.108
Jenkins
----------
Jenkins requires a JDK to run ; we need to download Java 17
https://download.oracle.com/java/17/archive/jdk-
17.0.4.1_windows-x64_bin.msi
While installing java , it might ask oracle credentials
Please give the following credentials
[email protected]
Bun878reef945
Windows installer
https://mirrors.tuna.tsinghua.edu.cn/jenkins/windows-
stable/2.361.2/jenkins.msi
Installation of jenkins using a war file
https://get.jenkins.io/war-stable/2.361.2/jenkins.war
Once downloaded
Go to the downloads folder
Type java -jar jenkins.war
This will extract and make the jenkins up and running on port
8080
Lets install jenkins
https://console.cloud.google.com/?pli=1
You can use two of the gcp accounts
If you see an error like the following :
There was an error while loading /home/dashboard?
project=careful-trainer-150607&pli=1&authuser=2.
You are missing at least one of the following required
permissions:
Project
resourcemanager.projects.get
Check that the project ID is valid and you have permissions to
access it .
Please select a project from the top
Select Myfirstproject
And refresh the page
On the extreme left you have three parallel lines ( the menu )
Click it , then in the dropdown go to Compute Engine and click
VM instances
Create instance
Give a name like name-jenkins ( use your name )
Ubuntu 16.04 LTS - select this OS
For firewall
Allow http
Allow https
Jenkins Itinerary
Installation of jenkins - Ubuntu
github jenkins integration
github maven jenkins integration
email jenkins notifications
slack notifications
master-slave setup
Build pipeline
continuous code review - sonarqube jenkins
continuous testing - selenium
CI-CD pipeline jenkins using ubuntu as base O/S
realtime project
Jenkins installation
Select an ubuntu instance - Ubuntu Server 20.04 LTS (HVM),
SSD Volume Type
Please use the above AWS guide (starting of this page) to
create the pem file and use puttygen to convert
Finally for the hostname give only the ip address instead of
ec2-user@ipaddress
Copy the public IP from the instance that you created just now
When prompted inside the shell terminal , type ubuntu and you
should be able to login
Lets take Ubuntu Server 20.04 LTS (HVM), SSD Volume Type
For launching through putty you can mention
ubuntu@ipaddress
Example [email protected]
Please note ec2-user will not work
sudo su
1. Update Ubuntu packages and all installed applications
sudo apt-get update -y
sudo apt-get upgrade -y
2. Next, Install JDK
sudo apt-get install openjdk-11-jdk -y
3. Verify Java version
java -version
4. Add gpg key for jenkins installation ( as sudo su )
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
5. Add the repository address to our /etc/apt/sources.list.d file
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
6. Update our package list again
sudo apt-get update -y
7. Install Jenkins
sudo apt-get install jenkins -y
Verify the installation by typing
systemctl status jenkins
Press q if you are unable to come back from the above status
screen to the normal terminal prompt ( systemctl status opens
a pager )
For troubleshooting please use the restart jenkins command
systemctl restart jenkins ( this might resolve issues some
times)
Now give public ip address:8080 in the browser
(public ip of the ec2 instance)
You need to open 8080 from security group
systemctl restart jenkins
If everything is good
If asked for password
Open a terminal and give the following command to get the
password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Give the password and install suggested plugins
Create username and password , please give email address
Click next and start using jenkins ( no changes to be done )
Incase if you guys face issues with username and password
Go to
cd /var/lib/jenkins
sudo vi config.xml
Enter i ( to get into edit mode)
Set the <useSecurity> xml tag to false , i.e. <useSecurity>false</useSecurity>
Press escape
:wq ( save and quit) enter
systemctl restart jenkins
Now you can access localhost:8080 without issues
systemctl status jenkins ( this will show jenkins as active)
If anyone faces the following issue
Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-
frontend), is another process using it?
sudo rm /var/lib/apt/lists/lock
apt-get update
Should fix issue
How to create a firewall rule
In the GCP console give the rule a name and description , then
set the following :
Logs : Off ( turning on firewall logs can generate a large
number of logs , which can increase costs in Stackdriver )
Network : default , Priority : 1000 ( priority can be 0 - 65535 )
Direction : Ingress , Action on match : Allow
Targets : all instances ( or a specified service account )
Source filter : IP ranges , Source IP ranges : 0.0.0.0/0
Protocols and ports : Specified protocols and ports -> tcp : 8080
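The same rule can also be created from the gcloud CLI instead of the console form. This is only a sketch ( the rule name allow-jenkins-8080 is our own choice , and it assumes gcloud is installed and authenticated ) , so it just prints the command for you to review before running :

```shell
# Build the gcloud command for an ingress rule allowing tcp:8080 from
# anywhere, matching the console settings above. Hypothetical rule name.
cmd="gcloud compute firewall-rules create allow-jenkins-8080 \
--direction=INGRESS --action=ALLOW --rules=tcp:8080 \
--source-ranges=0.0.0.0/0 --network=default --priority=1000"
echo "$cmd"
# review the printed command, then run it with: eval "$cmd"
```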
---------------
VM instance settings ( console form ) :
Zone : pick one ( zone is permanent )
Machine configuration : general-purpose family , 1 shared-core
vCPU , 4 GB memory
Boot disk : new 10 GB standard persistent disk , image Ubuntu
18.04 LTS
Identity and API access : Compute Engine default service
account , Allow default access
Firewall : tick Allow HTTP traffic and Allow HTTPS traffic
try http://ipaddress:8080 - you can open the jenkins page
Ensure that you are removing https and using only http
Inside the jenkins page
You can see that it is asking you for a password
Go to the ssh gcp console and type
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
You will get the password
Now paste the password
Type next - select install suggested plugins
Once installed
Skip continue as admin
Save and finish
Incase if jenkins asks you username and password
Please give admin as username
And password get it from
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
In order to do any automation inside the jenkins , please
remember 3 most important things
For any tool
(plugin : plugin helps us to connect jenkins to a particular
tool , for example , git plugin helps to connect git tool from
jenkins , it gives jenkins git capabilities)
1) Install plugin → manage jenkins -> manage plugins , go to
available plugins
And search for the plugin you are looking for and click
install without restart
Sometimes we need to download manually , we can go
to the advanced tab
And upload the hpi plugin
https://plugins.jenkins.io/git/ ( we will find all plugins
there )
Or
https://updates.jenkins-ci.org/download/plugins/
2) Global tool configuration - install the tool for example
we have installed git plugin previously , here we will
select which git version we want to install , it is similar to
setting up the environment variable that we set inside
our path variables
3) Configure system : where we can configure token based
authentication and integrate API
We will see some token authentication for webhook ,
sonar
( For example : if you want to send email notifications
we will use a post build action )
Lets begin the hands on exercises :
1) Github jenkins integration Here please extract audio
script and demos from the google drive
Download Winrar
https://www.win-rar.com/fileadmin/winrar-versions/
winrar/winrar-x64-601.exe
Download jenkins material
Rar file
https://drive.google.com/file/d/1-
G6oipBj5bKgJCvLGRjF6fUL74Y0x7_v/view
Zipfile
https://drive.google.com/file/d/
1OoKChZfRfly5rci0RSW5T6rI7xrsK6nO/view?usp=sharing
Download the vlc player
https://www.videolan.org/vlc/download-windows.html
All of you please install the VLC player and you can start
watching my videos for hands on exercises
2) Once you extract , click on the github jenkins integration
ppt ; if prompted for a password click Read Only
or
Google drive for entire batch
https://drive.google.com/file/d/1PS-BnrSQvpGQ-
GqE1A6W8yUWjK8h7ctY/view?usp=drive_web
(in the above google drive please go to jenkins audio script
and demos)
Github link
https://github.com/SrikanthPB/mavenwebhook use this for
github integration with jenkins
While installing the github plugin , if you need to restart
jenkins at any time use systemctl restart jenkins
Jenkins maven integration
Please take reference of jenkins maven integration ppt
and also watch video if possible
While installing java
You need to give java username password ( oracle account)
Incase if any one forgets Java credentials we can use the
following to reset
http://ipaddress:8080/descriptorByName/
hudson.tools.JDKInstaller/enterCredential ( this is for local
system on windows)
Or
http://ipaddress:8080/descriptorByName/
hudson.tools.JDKInstaller/enterCredential ( this is for AWS or
any linux machine)
Please use mine
Username : [email protected]
Password : Bun878reef945
-------------------------------
Github jenkins integration
Go back and take the build
Anytime if you want to go to home page
Click on the guy( jenkins )as below
Github with maven jenkins integration
Go to manage->jenkins -> manage plugins ->
Once the plugin is installed we will get the message
Now , lets go to global tool configuration , under maven
Save it go back to previous page
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 47.010 s
[INFO] Finished at: 2021-09-10T02:51:13Z
[INFO] ------------------------------------------------------------------------
Finished: SUCCESS
automatic builds
So far we did manual builds by clicking on the build now button
We need to automate the builds from the developer machine to
Jenkins ; what the developer sends : Github code
We need Github - Jenkins webhook automated integration
Expectation : the moment developer pushes the code to
github repository the build should happen
Can you guys please check if you can see the settings
option ?
If not, what is the reason ?
Ans : Only the owner can see the settings ; only then can we
achieve the automated build process with jenkins
Lets everyone create our own repo and clone my repo
Or you can fork it too ,
I cannot see settings here , because I am not the owner of
the github repo
Now I will click on fork
Click on create fork , that will clone the other’s repo to your
own repo
After forking you can see the settings now
All of you take my code repo
Create folder called maven inside c drive ( machine)
I.e mkdir maven
Go to the folder open cmd prompt and type
cd maven
git init
git clone https://github.com/SrikanthPB/mavenwebhook
Use cd mavenwebhook ( you need to ensure that your
pom.xml is at the root folder , so please don't forget to change
directory with cd mavenwebhook )
Now create your own repository in github.com
and Use the following commands to push to the github repo
Sometimes you need to use
git remote remove origin ( because my repo is already set as
origin ; use git remote show origin to see the current origin of
the repo )
git remote remove origin ( now the original origin is removed )
git remote add origin
https://github.com/username/reponame.git ( this command
you can find in the newly created repository)
Lets push our changes to the repo
git push -u origin master
Give username and personal access token
( refer to page no 58) ( to generate token i.e password)
Once everyone creates their own repo , they can see the
settings inside github for the mavenwebhook project
What are next steps ? Lets understand REST API
(webservices /webhook)
Webservices help you to connect two different components
over the internet (REST API - webservice)
github-webhook/
Reason for creating your own repo : the settings button
isn't enabled for anyone who is not the owner of the repo .
So please clone it and push to your own repo
Once you pushed to the repo
Since you have your own repository , you can go to settings
And click webhook
Add a webhook
Basically , we would like to create a webhook which can be
hooked to jenkins
Please ignore the following red fonts , if you already have an
ipaddress
curl http://169.254.169.254/latest/meta-data/public-ipv4
( type this in the command prompt , you will get the public ip )
In my case the public ip is 43.204.102.163
Now
We can add the payload url , something like
localhost:8080/github-webhook/
This will not work it throws you an error
There was an error setting up your hook: Sorry, the URL host
localhost is not supported because it isn't reachable over the
public Internet
Please note for AWS : if you already have a public ip address
and port number , the following step is not required ( the
following step is noted in red font )
The above error means that localhost jenkins cannot be
accessible for github which is on the
Public internet
We need to make localhost:8080 , available to the public
Use ngrok
https://ngrok.com/download
Download for linux
Extract the file ngrok from the zip file once downloaded
Open the terminal from the local where you ngrok extracted
Go to the terminal and type
./ngrok http 8080
It will form a url like
http://8390a8dbf5b3.ngrok.io ( use http only )
Now remove localhost:8080 and use
http://8390a8dbf5b3.ngrok.io/github-webhook/
So your localhost:8080/github-webhook/ will now become
http://8390a8dbf5b3.ngrok.io/github-webhook/
( here the ip address and port number belong to jenkins and
/github-webhook/ belongs to your github ; the webhook
exposes the rest api methods which help us get information
from github and send it to jenkins )
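Putting the pieces together : the payload URL is always your publicly reachable Jenkins base address plus the fixed /github-webhook/ path. A tiny sketch ( webhook_url is our own helper ; the addresses are the examples used in these notes ):

```shell
# webhook_url: hypothetical helper - appends the fixed /github-webhook/
# endpoint to a Jenkins base URL, tolerating a trailing slash.
webhook_url() {
  base="${1%/}"            # strip one trailing slash if present
  echo "$base/github-webhook/"
}

webhook_url "http://43.204.102.163:8080"   # AWS public ip + port
webhook_url "http://8390a8dbf5b3.ngrok.io" # ngrok tunnel to localhost:8080
```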
We need to use ngrok to integrate jenkins and github
We will basically make the jenkins localhost available
to the public so that jenkins can be accessed by github
easily for the integration
After downloading please right click on the zip folder and click
Extract here
From here open a terminal
And run the following command
./ngrok http 8080
After giving the command it says as following
Now copy the url
http://8390a8dbf5b3.ngrok.io ( this is nothing but your
localhost:8080 , publicly available),
Use this for your settings inside github
Webhook url , it should be like
http://8390a8dbf5b3.ngrok.io/github-webhook/
Continuing after giving the webhook url inside github
settings
Now we will go to the jenkins
Give the same git repo that you have created the webhook
Note : you should be the owner of the repo ( please give your
repo , not mine )
Also go to build triggers and give following configuration
Manage jenkins -> Configure system : -> GitHub
Click advanced
Click on specify another url for github configuration and paste
the url and select
re-register hook for all jobs and save / apply
Please be mindful that , your page may scroll down
automatically when you select specify another url for github
configuration , scroll up to the same section
We have successfully integrated github with jenkins
Now you have to enable your project to use the same feature
Come to freestyle project that you created earlier ( make sure
you are using the github url)
go to build triggers
Select GitHub hook trigger for GITScm polling and apply
Now go back to the github repo where you cloned my repo
and started using
Go the pom.xml , make some changes in the github
Commit it and give a comment
Now you should see the build is taking place automatically :)
PMD checkstyle
https://github.com/SrikanthPB/checkstyle.git
Checkstyle is used to validate the coding standards defined in
checkstyle.xml
Here we defined that the code should not return null values ;
thus we have avoided a future exception or error , which
checkstyle was able to catch during static code analysis
<?xml version="1.0" ?>
<!DOCTYPE module PUBLIC
"-//Puppy Crawl//DTD Check Configuration 1.2//EN"
"http://www.puppycrawl.com/dtds/configuration_1_2.dtd"
>
<module name="Checker">
<module name="TreeWalker">
<module name="ReturnNullInsteadOfBoolean"/>
</module>
</module>
Create a freestyle project and give the git repository as
https://github.com/SrikanthPB/checkstyle.git
Now go to the build option and give the following command :
checkstyle:check is the goal ( given inside top level maven
targets ) for us to scan and get the results displayed
Build notifications :
Sometimes we need to notify our team members about the
build status or build failures
Install the Email Notification plugin
Rest is available in the IRC / Email notification ppt
In jenkins - go to manage jenkins -> configure system ->
E-mail Notification ( please do not select editable email
notification and extended email notification)
Follow the ppt and the video as well
Click on advanced in the above
smtp.gmail.com - smtp server
Port no -465
( please do not use any email which is linked to
internet banking or any confidential information ; better
create a dummy gmail account and start using it )
You need to enable less secure apps on
https://myaccount.google.com/lesssecureapps ( please
switch this on )
You need to disable two factor authentication
https://myaccount.google.com/signinoptions/two-step-
verification/enroll-welcome
https://accounts.google.com/DisplayUnlockCaptcha ( use an
incognito window and login with the same username and
password that you used inside jenkins configure system )
for two factor verification : go to manage your google
account , then privacy and personalization ; in the security
section you can see 2-step verification . I turned it off
Sample successful screenshot
slack notifications
Please note that
In order to add jenkins inside the slack
Follow the screenshot given below
Inside the jenkins job
Build pipeline
Instead of looking into logs for many goals we can create a
build pipeline to visualize
lets achieve this first by creating four different jenkins jobs
Now we have four different projects for four different goals
Now lets build a pipeline
First we need to install the plugin , manage jenkins - manage
plugins -> go to available and search for build pipeline
And install the plugin
Now we can create a pipeline
Give a name
Now we need to select an initial job
Obviously out of all goals i.e clean-compile-install-package
Clean is the first goal
Now click ok button after selecting the initial job
We need to create dependency between the four individual
jobs
By selecting the option “ build after other projects are built ”
Because clean is your initial project selected from the
pipeline ,
We need not do any configuration inside clean ,
Lets start from compile , go inside compile job
compile_maven
Now , we have successfully created a dependency between
clean and compile . now lets integrate compile and install
Now , lets get into
Install-maven
Now , clean -> compile -> install are integrated in a pipeline ,
lets
Integrate package as well
Package-maven
Now , successfully clean -> compile->install->package is under
a build pipeline.
Go to the clean-compile-install-package pipeline
And trigger a build
And please select the following option inside the build
pipeline's configure , so that we can see more of the past
successful builds ( by default it shows 1 , we will change it to
10 )
SonarQube
It is a code review tool which can analyse code for defects
and vulnerabilities
Lets use an AWS free tier Windows machine to get our
sonar up and running
Once pem file is downloaded
Launch your instance and run the instance
Select RDP
Download the remote desktop file
Get password
We need to upload the pem file
Decrypt the password
Download the Windows Desktop file ( we did it earlier)
Now connect using windows desktop file
Now you can give the decrypted password
And start using the Windows
Please download from the google drive to find the software
for Java and Sonarqube ; we have winrar to extract the
downloads
Google drive for entire batch
https://drive.google.com/file/d/1PS-BnrSQvpGQ-
GqE1A6W8yUWjK8h7ctY/view?usp=drive_web
We can also use 7-Zip to unzip
https://www.7-zip.org/download.html
In order to start the sonarqube
Go to the location where you downloaded the google drive
contents and look for sonarqube
Here we have 3 folders . Please delete the old folders and
extract them fresh from the rar files again ( to ensure no
conflicts with a previous run )
Sonarqube-6.4 - SonarQube server
Sonar-scanner-3.0.3.778-windows - Scanner for scanning the
projects
Sonar-scanning-examples-master - SonarQube example
projects where we have the source code
First we need to go inside
Sonarqube-6.4
C:\sonar\sonarqube-6.4\sonarqube-6.4\bin\windows-x86-
64
And type cmd in the address bar ,
And type : StartSonar.bat
This will bring sonarqube server up
If server fails uninstall older java version , please install jdk
1.8_144 from the google drive which I shared below
Google drive for entire batch
https://drive.google.com/file/d/1PS-BnrSQvpGQ-
GqE1A6W8yUWjK8h7ctY/view?usp=drive_web
jvm 1 | 2020.12.18 09:34:08 INFO app[]
[o.s.a.SchedulerImpl] Process[es] is up
( This means your sonarqube server started properly)
What should we do next ?
We should be able to scan our projects using our sonar-
scanner
Lets copy the location of the sonar-scanner and come back to
your sonar projects
Like c:\sonar\sonar-scanner-cli-3.0.3.778-windows\sonar-
scanner-3.0.3.778-windows\bin
And come to the directory
Where you have the different examples ,
\sonar\sonar-scanning-examples-master\sonar-scanning-
examples-master\sonarqube-scanner
Here from this directory type cmd
We will run our scanner from this project
C:\sonar\sonar-scanner-3.0.3.778-windows\bin\sonar-
scanner.bat
After copying the above location , go back to the place where
you have the projects
I.e
C:\sonar\sonar-scanning-examples-master\sonar-scanning-
examples-master\sonarqube-scanner
Type cmd here ( in the address bar )
Please note the above location should contain my properties
file
C:\sonar-scanner-3.0.3.778-windows\bin\sonar-scanner.bat
It will open 9000 port ,
Type localhost:9000 in the browser
Rules are combined to create a profile ; the profiles created
feed into a quality gate
Login into sonarqube
Username admin
Password admin
Sonar Jenkins
Code review -> Continuous code review
token -> install sonar scanner plugin
global tool configuration -> Jdk -> /usr/lib/jvm/openjdk11
configure system -> sonar -> url of sonar , token
Project
github -> github url of sonar -> sonar project
build step -> sonar scanner -> java properties :
sonar.projectKey=org.sonarqube:sonarqube-scanner
sonar.projectName=Example of SonarQube Scanner Usage
sonar.projectVersion=1.0
( If you face issues for downloading sonarqube plugin ,
please use the following link)
https://updates.jenkins.io/download/plugins/sonar/2.13/
sonar.hpi
In order to integrate the Jenkins and sonar please follow the
sonar-jenkins integration
Ppt and video inside the jenkins audio script and demos from
google drive
Use the following github for sonarqube project
https://github.com/SrikanthPB/sonars.git
Suppose if you are using sonar on windows AWS
Please ensure you are opening firewall for 9000 port inside
Windows
Once you add the port number inside windows firewall
Add security inbound rules for AWS instance also
Otherwise you cannot access sonarqube from outside
Step 1 : Manage jenkins -> Manage plugins -> sonar scanner
Install the plugin without restart
Step2 : Manage jenkins-> global tool configuration ->
Sonar scanner ( Dont select sonar scanner for MS build)
Give a name and select latest version
Step 3 :
Manage jenkins -> configure system ->
Save the page . now we need to give the authentication
token
Now for server authentication token
Go to your sonarqube , click My Account on the extreme right
and select Security . Give the token a name such as jenkins ,
generate the token and copy it
Now go to manage jenkins -> manage credentials
Select secret text from the dropdown and save it
Since we have created the token
Now go to manage jenkins -> configure system
Select SonarQube installations
Select the secret text , now you are ready
Create a freestyle project
Give the github url
https://github.com/SrikanthPB/sonars.git
From the build steps : choose execute sonarqube scanner
( never choose sonar MS build)
Analysis properties
Give the following properties
sonar.projectKey=org.sonarqube:sonarqube-scanner
sonar.projectName=Example of SonarQube Scanner Usage
sonar.projectVersion=1.0
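For reference , the same three analysis properties normally live in a sonar-project.properties file at the project root when you run the scanner outside Jenkins. A minimal sketch ( sonar.sources is an addition here - the scanner needs to know where the code lives ; adjust the path to your layout ):

```properties
# Minimal scanner configuration - same values as the Jenkins build step above.
sonar.projectKey=org.sonarqube:sonarqube-scanner
sonar.projectName=Example of SonarQube Scanner Usage
sonar.projectVersion=1.0
# Assumed addition: directory containing the sources to analyse.
sonar.sources=src
```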
Ensure you have jdk installed for your jenkins
Manage jenkins -> global tool configuration
name : java
Please use mine
Username : [email protected]
Password : Bun878reef945
In case this method is not working , lets use a manual
approach
Connect to the ubuntu instance and install the jdk manually
( already done earlier , before installing jenkins ) . now lets
go to the global tool configuration
Now lets give the above path manually
I.e
/usr/lib/jvm/java-11-openjdk-amd64
Now you can run the project
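If you are unsure what path to paste into global tool configuration , a quick sketch like this lists the candidates ( jdk_candidates is our own helper ; /usr/lib/jvm is the usual Debian/Ubuntu location , other distros differ ):

```shell
# jdk_candidates: hypothetical helper - lists installed JVM homes under a
# directory (default /usr/lib/jvm) so you can pick one for the JDK path.
jdk_candidates() {
  dir="${1:-/usr/lib/jvm}"
  if [ -d "$dir" ]; then
    ls -d "$dir"/*/ 2>/dev/null || echo "no JVM directories under $dir"
  else
    echo "no JVM directories under $dir"
  fi
}
jdk_candidates
```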
Now in the project
Use ngrok to create a public url from sonar local 9000 port
Download ngrok
https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-
windows-amd64.zip
Extract and go to the folder and run
./ngrok http 9000
Copy the http url generated something like
http://82128c2135ad.ngrok.io (example) ( use this inside
jenkins to integrate sonar and jenkins)
Suppose you are using Windows from AWS you can directly
Copy the ipaddress
Please note adding the token for jenkins you can do from the
Manage jenkins -> credential manager
Select global and add credentials
For java oracle credentials you can update by using following
link
http://localhost:8080/descriptorByName/
hudson.tools.JDKInstaller/enterCredential
Build pipeline
We will create a build pipeline for a better GUI representation
and for convenience
For example , with four individual goals - clean compile install
package - inside one project , it is difficult to debug
That is when we can use a build pipeline for a GUI way of
debugging things
For example as follows
In order to achieve the above
Go to Manage Plugins -> Available section -> search for
Build Pipeline.
Install the Build Pipeline plugin (install without restart).
If the above fails, then we need to upload the hpi plugin
manually:
https://updates.jenkins-ci.org/download/plugins/build-pipeline-plugin/1.5.8/build-pipeline-plugin.hpi
Download from the above link and upload it from Manage Plugins -> Advanced.
Once it is installed,
let's create four different freestyle projects with
individual goals, like:
Maven_clean
Maven_compile
Maven_install
Maven_package
All of the above Jenkins freestyle jobs will use
https://github.com/SrikanthPB/mavenwebhook
and each of them will run its respective goal from the top-level
Maven targets.
We can create the build pipeline now.
Once we have the build pipeline view, please select the initial job
that we created earlier (i.e. Maven_clean).
Now only the initial job that we selected will run in
the build pipeline (of all the goal jobs that we created).
The following connections are required:
1) Clean and compile should be connected
2) Compile and install should be connected
3) Install and package should be connected
1) For clean and compile to be connected, we need to go
inside the compile job:
Configure -> Build Triggers -> Build after other projects are
built, and select Maven_clean (which is the previous goal for
compile).
2) For compile and install to be connected, we need to go
inside the install job:
Configure -> Build Triggers -> Build after other projects
are built, and select Maven_compile
(which is the previous goal for the install job).
3) For install and package to be connected, we need to go
inside the package job:
Configure -> Build Triggers -> Build after other projects
are built, and select Maven_install
(which is the previous goal for the package job).
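The same clean -> compile -> install -> package chain can also be expressed as a single Declarative Pipeline job, which gives a similar per-stage visualization without wiring up build triggers. A sketch (it assumes Maven is available on the agent and uses the same repository as above):

```groovy
// Sketch: the four chained freestyle jobs as one Declarative Pipeline.
pipeline {
    agent any
    stages {
        stage('Clean')   { steps { sh 'mvn clean' } }
        stage('Compile') { steps { sh 'mvn compile' } }
        stage('Install') { steps { sh 'mvn install' } }
        stage('Package') { steps { sh 'mvn package' } }
    }
}
```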
Now everything is connected as below.
We now have a nicely laid-out build pipeline, which shows
the status of the individual goals (in this case everything
went well).
Please note that we can increase the number of displayed builds.
I have deliberately given an invalid goal inside the package job
to fail it; now we have the following display.
Distributed builds for Jenkins
Inside AWS:
passwd (set a password); give this username and password
inside Jenkins.
We need to edit:
vi /etc/ssh/sshd_config
#Port 22
Remove the hash to enable port 22 for communication between
the Jenkins master and slave, and set:
PasswordAuthentication yes
PermitRootLogin yes
Save it, then:
systemctl restart sshd
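The three sshd_config edits above can also be applied non-interactively with sed. A sketch, demonstrated on a scratch copy of the file so nothing real is touched; on an actual node, point CFG at /etc/ssh/sshd_config and run systemctl restart sshd afterwards:

```shell
# Sketch: apply the sshd_config edits above without opening an editor.
CFG=$(mktemp)
printf '%s\n' '#Port 22' '#PasswordAuthentication no' '#PermitRootLogin prohibit-password' > "$CFG"

sed -i 's/^#Port 22$/Port 22/' "$CFG"                                          # enable port 22
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' "$CFG"    # allow password login
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' "$CFG"                  # allow root login

cat "$CFG"
```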
Now, let's go inside Jenkins:
Manage Nodes and Clouds
Please remember: since we enabled password authentication,
we need to generate a password (using the passwd command),
and please remember that password.
Let's define a remote root directory (location) for the agent.
Credentials: to add credentials here, click Add -> Jenkins,
give the username and password,
select it, and save.
Now your agent is successfully connected and running online.
You can verify it as follows
(note: in the above screenshot there is no error beside
jenkins-slave, which means it is working fine).
In case there is an error, please verify sshd_config:
vi /etc/ssh/sshd_config
The directives above (Port 22, PasswordAuthentication yes,
PermitRootLogin yes) should be set properly, then:
systemctl restart sshd
Ensure your slave has the required Java version:
yum install java-11-openjdk-devel
We can also add tools to our Jenkins slave (Global Tool Configuration).
Please remember to install git on this Jenkins slave (yum
install git or apt-get install git, depending on your OS) and give
the location of git, i.e. /usr/bin/git in this case; for Maven it
will be /usr/share/maven.
In order to make use of this agent,
go to your freestyle job, and in the General options select
'Restrict where this project can be run' and give the
slave name.
Selenium
We use automation testing in order to avoid manual testing;
Selenium provides automated testing.
Let's first use the Selenium IDE:
https://chrome.google.com/webstore/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd?hl=en
The above is the link for Chrome; click on it and add it to your
Chrome extensions.
Click the extension icon at the top right of your Chrome browser
and record a new test in a new project.
Give the website url of the project
After giving the url , click start recording
The moment we click start recording, it will open the
website which we wanted to test.
Here, please continue the testing you want to do; all
the scenarios you exercise while recording
will be stored, so that next time we just replay the recorded
scenarios. Once done, click the stop button (red square).
Once stopped, give the test case a name.
Once the test case is saved, we can play it whenever we want
to test the same scenario.
Selenium WebDriver
It is used to test custom functionality of the code; we can
write our own scripts for testing.
To continue with this, we need to download Eclipse:
https://www.eclipse.org/downloads/
Click Download again; now it will get downloaded.
Once downloaded, click on it to open, and install Eclipse:
select 'Eclipse IDE for Java Developers',
then select Install,
and accept the license agreement.
Once installation is done, click on the Launch button,
OR
double-click the Eclipse desktop icon and click Launch.
Click Launch again (this is for selecting the workspace).
Now Eclipse will start.
Once you see the Welcome window, close it.
Go to Package Explorer, right-click -> New -> Java Project.
Create a new Java project, click Next -> Finish.
Select src as above, right-click -> New -> Class,
give the name Sele, and paste the following code.
-------------------------------
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Sele {
    public static void main(String[] args) {
        // Point Selenium at the chromedriver binary (Linux path).
        System.setProperty("webdriver.chrome.driver",
                "/home/troubleshooting/Downloads/chromedriver");
        WebDriver driver = new ChromeDriver();
        // On Windows, use a path like the following instead:
        // System.setProperty("webdriver.chrome.driver", "G:\\chromedriver.exe");

        String baseUrl = "http://demo.guru99.com/test/newtours/";
        String expectedTitle = "Welcome: Mercury Tours";
        String actualTitle = "";

        // launch the browser and direct it to the base URL
        driver.get(baseUrl);
        // get the actual value of the title
        actualTitle = driver.getTitle();

        /*
         * compare the actual title of the page with the expected one
         * and print the result as "Passed" or "Failed"
         */
        if (actualTitle.contentEquals(expectedTitle)) {
            System.out.println("Test Passed!");
        } else {
            System.out.println("Test Failed");
        }

        // close the browser
        driver.close();
    }
}
-------------------------------------------
After pasting it entirely, press Ctrl+S to save.
Now download the Java client libraries:
https://selenium-release.storage.googleapis.com/3.13/selenium-java-3.13.0.zip
Extract it.
In Eclipse, right-click on your project -> Properties ->
Java Build Path -> Configure Build Path, click on the Libraries tab,
click Add External JARs, add all the jar files, and click
Apply and Close.
Include the jars inside the libs folder, and also
client-combined-3.13.0.jar and
client-combined-3.13.0-sources.jar.
The following are the jar files
Please note that we need the driver for Chrome as well as the Java
libraries:
https://chromedriver.storage.googleapis.com/index.html?path=90.0.4430.24/
https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_win32.zip
(use this for Windows)
Download chromedriver_linux64.zip
(use this for Linux).
Download the Chrome WebDriver and place it inside
/home/yourusername/Downloads/chromedriver
Replace yourusername with yours, for example smitha or
harish, and set the system path inside the program accordingly:
System.setProperty("webdriver.chrome.driver", "/home/username/Downloads/chromedriver");
If you get compilation issues, follow these steps:
1. Select the Java project (the created one)
2. Build Path
3. Configure Build Path
4. Java Compiler
5. Change the compiler compliance level (I selected 1.8)
6. Apply and Close
Selenium
Use the driver version matching your browser version.
Open a terminal and give the following command to get the
latest Chrome version:
apt-get install google-chrome-stable
This will upgrade Chrome to version 90.
Now run the program: right-click on the program -> Run As ->
Java Application.
Your program will open a new browser, run the test, and
print the result on your Eclipse console.
Similarly, we can do it for the Firefox browser as well,
but for Firefox, get the gecko driver:
https://github.com/mozilla/geckodriver/releases
System.setProperty("webdriver.gecko.driver", "/home/troubleshooting/Downloads/geckodriver");
Please note that the system property and the driver class change
(highlighted below).
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Sele {
    public static void main(String[] args) {
        // Point Selenium at the geckodriver binary (Linux path).
        System.setProperty("webdriver.gecko.driver",
                "/home/troubleshooting/Downloads/geckodriver");
        WebDriver driver = new FirefoxDriver();
        // To use Chrome instead, comment the two lines above and uncomment:
        // System.setProperty("webdriver.chrome.driver", "G:\\chromedriver.exe");
        // WebDriver driver = new ChromeDriver(); // also import ...chrome.ChromeDriver

        String baseUrl = "https://www.google.com/";
        String expectedTitle = "Google";
        String actualTitle = "";

        // launch Firefox and direct it to the base URL
        driver.get(baseUrl);
        // get the actual value of the title
        actualTitle = driver.getTitle();

        /*
         * compare the actual title of the page with the expected one
         * and print the result as "Passed" or "Failed"
         */
        if (actualTitle.contentEquals(expectedTitle)) {
            System.out.println("Test Passed!");
        } else {
            System.out.println("Test Failed");
        }

        // close Firefox
        driver.close();
    }
}
Jenkins and Selenium integration
Go to the Eclipse project, right-click and click Export.
Type 'runnable jar', select Runnable JAR file, and click Next.
For the launch configuration, select Sele from the dropdown.
For the export destination, put it in your Downloads folder and
select Finish.
We can upload the Sele.jar file onto the Jenkins node
using an FTP client:
https://filezilla-project.org/download.php?type=client
After installing FileZilla, open it.
The host is the public IP address of your machine.
The username is root; the password needs to be set first
(give the passwd command in your Linux terminal).
In the screenshot above, the left side is the local machine and the
right side is the Linux machine; please drag and drop Sele.jar across.
Now you will have the jar file in the Downloads folder.
Go to the Jenkins dashboard.
Create a new freestyle project and name it sele-jenkins-integration.
Go to the Build section and select Execute Shell.
Give the command:
java -jar /home/username/Downloads/Sele.jar
(please give chmod permissions to the folders and
the file Sele.jar)
chmod 777 /home
chmod 777 /home/username
chmod 777 /home/username/Downloads
chmod 777 Sele.jar
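The chmod 777 commands above work, but they make the folders world-writable. A slightly tighter sketch: 755 on the directories and 644 on the jar is enough for java -jar. Demonstrated here on a scratch directory so nothing on a real node is touched; substitute /home/<username> and the real Sele.jar on the node:

```shell
# Sketch: tighter permissions than chmod 777 for the Jenkins job.
DIR=$(mktemp -d)
mkdir -p "$DIR/Downloads"
touch "$DIR/Downloads/Sele.jar"

chmod 755 "$DIR" "$DIR/Downloads"    # directories: everyone can traverse and read
chmod 644 "$DIR/Downloads/Sele.jar"  # jar: world-readable is enough for 'java -jar'

stat -c '%a' "$DIR/Downloads/Sele.jar"
```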
Google Drive link
Please ensure you keep your Sele.jar in the Downloads folder,
and also your Chrome driver / Firefox driver in the Downloads
folder.
https://drive.google.com/file/d/1qTh2I-VyShwyFNmBgdhxb655IJG35xwh/view?usp=drive_web
The above link has the Sele.jar.
Firefox gecko driver for Linux:
change the system property to webdriver.gecko.driver and give the
location of the geckodriver.
Download geckodriver-v0.29.1-linux64.tar.gz from the geckodriver
releases page:
https://github.com/mozilla/geckodriver/releases
Backup for Jenkins
First let's install the ThinBackup plugin.
Once done, go to Manage Jenkins and
click on ThinBackup.
Now let's give the settings:
I have given the location of the backup directory.
Once we have given the backup directory, we can start backing up
the files.
Now we can restore from previous backups/save points
whenever there is a crash.
Build stats:
Let's look at the Global Build Stats plugin.
Once installed, we can see Global Build Stats under Manage Jenkins.
We can give our requirements for the new chart,
then click 'Create new chart' at the bottom.
Finally we have a chart configuration like the following.
Docker
-------
Docker is a lightweight container platform, where we can run our
applications and manage them effectively.
One advantage of using Docker is that a container takes much less
space than a full OS.
Another advantage of Docker is that we can use it to
exchange our entire software stack
(i.e. your code along with the software, environment and configuration).
For Linux, installation is pretty simple.
To install Docker:
sudo su
apt --fix-broken install   (use this if there are any issues)
apt-get update -y
apt-get install docker.io
On Ubuntu:
docker run -it centos
(this will take you inside a CentOS container; a new container is
created every time)
To detach gracefully without stopping the container, press Ctrl+P then Ctrl+Q.
Let's see a sample application, for example SonarQube.
Before running the docker command:
systemctl stop jenkins
(to avoid a port conflict between the two Jenkins instances, we
stop the host Jenkins first)
docker run -it -p 9000:9000 sonarqube
Here sonarqube is the image name; -p maps hostPort:containerPort;
-it (run with an interactive terminal) takes you directly inside the
SonarQube container.
Image: the image is the actual application, for example Jenkins or
SonarQube.
Container: a running instance of the image on a minimalistic OS
(a core Linux distribution that helps run our software).
docker run -it -p 8080:8080 jenkins/jenkins
(please note this Jenkins is coming from Docker)
If you want to search for an image, type docker search imagename.
Example: docker search jenkins
Based on the response you get, you can choose any image.
- i : interactive
- t : terminal (tty)
- d : detached mode
For example, docker run -itd sonarqube starts the container in the
background (detached mode) instead of attaching your terminal to it.
To check the list of all running containers:
docker ps
To list images:
docker images
To run a container from an image:
docker run -it imagename
docker run -it centos
To work with applications which have port numbers:
docker run -it -p 80:80 httpd
In 80:80, the first is the host port and the second is the
container (application) port.
Go to the browser and give ipaddress:80; it should display
'It works!'.
Please note: give http://ipaddress:80 (https will not work).
docker run -itd centos   (this will run in detached mode)
docker ps -a   (all containers, including stopped containers)
docker ps   (only running containers)
To remove all containers, use:
docker rm $(docker ps -a -q)
docker rm $(docker ps -a -q) --force   (if you want to force-delete)
To remove all images:
docker rmi $(docker images -q)
docker rmi $(docker images -q) --force
Path of images and containers:
/var/lib/docker
Images, containers and volumes are all stored in this directory.
Exercise
Type docker run -it centos
Come out by pressing Ctrl+P then Ctrl+Q.
When you come out,
type docker images
and check the size of the centos image:
it is about 209 MB.
Now you can go inside the centos container again:
docker run -it centos
Here type yum install java
Once installed, we need to commit, tag and push to the repo:
docker commit <container-id> <image-name>
Example: docker commit 032212 centos
docker tag <image-id> username/newtagname
docker push username/imagename
Example: docker push srikss/centos
If you face access-denied issues,
type docker login and give your
username and password.
DOCKER COMPOSE
Create the file:
vi docker-compose.yml
(please install docker-compose first: apt-get install docker-compose)
Before copying the following, always validate the YAML online.
Copy-paste the code into
https://yamlvalidator.com/
and you should see a message saying it is valid YAML.
------------------------
version: '2'
services:
  db:
    image: mysql:5.7
    container_name: db
    environment:
      MYSQL_ROOT_PASSWORD: my_secret_password
      MYSQL_DATABASE: app_db
      MYSQL_USER: db_user
      MYSQL_PASSWORD: db_user_pass
    ports:
      - "6033:3306"
    volumes:
      - dbdata:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: pma
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    restart: always
    ports:
      - 8081:80
volumes:
  dbdata:
----------------------
Use the command:
docker-compose up -d
This will pull the images and start the services as per the
docker-compose.yml.
Now open ipaddress:8081 (this will open the phpMyAdmin page);
log in with username db_user, password db_user_pass, and leave
the server field blank.
Docker customized image
FROM node:12-alpine
RUN apk add --no-cache python2 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
Save the above file as Dockerfile.
Now, to build, type:
docker build .
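Since COPY . . copies the whole build context into the image, a .dockerignore file next to the Dockerfile keeps unneeded files out of the context. A sketch; the entries are assumptions about a typical Node project:

```
# .dockerignore (sketch) -- keeps the build context small for 'docker build .'
node_modules
.git
*.log
```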
—-------------------------
CI - CD
Interview perspective *** (very important)
For any automation of the CI-CD pipelines, please
remember the following useful scenarios (most of the
pipelines use Groovy script only):
Java pipeline
github -> maven install -> maven test -> docker build -> docker deploy
dotnet pipeline
github -> msbuild install -> msbuild test -> docker build -> docker deploy
python pipeline
github -> python build tool install -> build tool test -> docker build -> docker deploy
node js pipeline
github -> npm install -> npm test -> docker build -> docker deploy
angular pipeline
github -> npm install -> npm test -> docker build -> docker deploy
Solutions for the above
Java
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'maven:3.8.6-openjdk-11-slim' } }
    stages {
        stage('build') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Node.js / JavaScript
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'node:16.17.1-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Ruby
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'ruby:3.1.2-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'ruby --version'
            }
        }
    }
}
Python
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'python:3.10.7-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'python --version'
            }
        }
    }
}
PHP
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'php:8.1.11-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'php --version'
            }
        }
    }
}
Go
Jenkinsfile (Declarative Pipeline)
/* Requires the Docker Pipeline plugin */
pipeline {
    agent { docker { image 'golang:1.19.1-alpine' } }
    stages {
        stage('build') {
            steps {
                sh 'go version'
            }
        }
    }
}
--------------------------
Project on jenkins docker pipeline - CI/CD - Continuous
integration / continuous deployment
DESCRIPTION
Demonstrate the continuous integration and delivery by
building a Docker Jenkins Pipeline.
Problem Statement Scenario:
You are a DevOps consultant in AchiStar Technologies. The
company decided to implement DevOps to develop and
deliver their products. Since it is an Agile organization, it
follows Scrum methodology to develop the projects
incrementally. You are working with multiple DevOps
Engineers to build a Docker Jenkins Pipeline. During the
sprint planning, you agreed to take the lead on this project
and plan on the requirements, system configurations, and
track the efficiency. The tasks you are responsible for:
● Availability of the application and its versions in the
GitHub
○ Track their versions every time a code is
committed to the repository
● Create a Docker Jenkins Pipeline that will create a
Docker image from the Dockerfile and host it on Docker
Hub
● It should also pull the Docker image and run it as a
Docker container
● Build the Docker Jenkins Pipeline to demonstrate the
continuous integration and continuous delivery
workflow
The company's goal is to deliver the product to production
frequently, with high-end quality.
You must use the following tools:
● Docker: To build the application from a Dockerfile and
push it to Docker Hub
● Docker Hub: To store the Docker image
● GitHub: To store the application code and track its
revisions
● Git: To connect and push files from the local system to
GitHub
● Linux (Ubuntu): As a base operating system to start and
execute the project
● Jenkins: To automate the deployment process during
continuous integration
Following requirements should be met:
● Document the step-by-step process from the initial
installation to the final stage
● Track the versions of the code in the GitHub repository
● Availability of the application in the Docker Hub
● Track the build status of Jenkins for every increment of
the project
Jenkins pipeline
Start-to-end delivery:
we will use a Jenkins pipeline.
Execution:
Install Docker on the Ubuntu lab machine:
apt-get install docker.io
Create a Docker Hub account:
https://hub.docker.com/
Type docker login (in the terminal)
and give the username and password when prompted.
(Now give the permission so that Jenkins can have access to
Docker, since this is an integration between Jenkins and
Docker.)
sudo chmod 777 /var/run/docker.sock
Go to Jenkins:
Manage Plugins -> Available -> search for 'Docker Pipeline' in the
available section.
Install without restart.
Manage Jenkins -> Manage Credentials
Click on the global domain inside 'Stores scoped to Jenkins'
(Jenkins global credentials).
Now click Add Credentials.
Select Kind from the dropdown as 'Username with password'.
Give your username, not mine :)
username: srikss (replace with yours)
password: covid2019 (replace with yours)
id: dockerhub (same for everyone)
Manage Plugins -> Available -> 'NodeJS Plugin' -> install without
restart (sometimes 'install without restart' does not work; restart
Jenkins after installation to ensure the NodeJS plugin is installed properly).
Global Tool Configuration -> NodeJS installations:
name: node
Select version 10.0.0.
Go to the Jenkins home page.
Click New Item -> Pipeline -> give the name docker-jenkins.
In the newly created project (docker-jenkins),
scroll to the end
and select Pipeline script.
Note: in the registry below, please change my
username to your Docker Hub username.
pipeline {
    environment {
        registry = 'srikss/ubuntu'
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools { nodejs 'node' }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/SrikanthPB/pipelinescript.git'
            }
        }
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run bowerInstall'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
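The 'Building image' stage tags each image with the Jenkins build number, so every run pushes a distinct tag. A plain-shell illustration of that tag composition (BUILD_NUMBER is injected by Jenkins; the value here is made up):

```shell
# Sketch: how the pipeline composes the image tag from the build number.
BUILD_NUMBER=7                      # set by Jenkins in a real run
registry='srikss/ubuntu'            # replace with <your-dockerhub-user>/ubuntu
image="${registry}:${BUILD_NUMBER}" # e.g. what docker.build receives
echo "$image"
```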
If you face any issues, please use the following commands to run the
deployment successfully:
docker rm $(docker ps -a -q) --force
docker rmi $(docker images -q) --force
apt-get update -y
apt-get upgrade -y
chmod 777 /var/run/docker.sock
Note: please give your username, not mine:
username/ubuntu
Distributed builds:
Please check my PPT and the video from the Jenkins audio and
demos.
Node configuration
Ensure your node has JDK 11 installed:
sudo yum install java-11-openjdk-devel
vi /etc/ssh/sshd_config
# Port 22   -> here remove the #, which denotes a comment
PermitRootLogin yes
PasswordAuthentication yes   (please ensure there is no # before these lines)
systemctl restart sshd
Jenkins Tomcat deployment
Install Tomcat (only on Ubuntu):
apt-get update -y
sudo apt install tomcat9 tomcat9-admin   (we are installing Tomcat 9
with this command)
ss -ltn   (here we are checking which ports are open)
sudo ufw allow 8080
(allow from any to any port 8080 proto tcp; this opens the
firewall)
vi /etc/tomcat9/tomcat-users.xml   (to open the editor)
<user username="tomcatmanager" password="password" roles="manager-gui"/>
<user username="deployer" password="password" roles="manager-script"/>
Add the above inside <tomcat-users>
and before </tomcat-users>.
Restart for it to take effect:
sudo systemctl restart tomcat9
Install the 'Deploy to container' plugin.
Now create a freestyle project and give the following repo:
https://github.com/AKSarav/TomcatMavenApp
From the post-build actions, select 'Deploy war/ear to a container'.
For the context path, give the name TomcatMavenApp,
and for the war/ear files give:
**/*.war
Next, from 'Add a container' inside 'Deploy war/ear to a container',
select Tomcat 9.
Here select the credentials:
username: deployer
password: password
The Tomcat URL is the public IP address of the server where
Tomcat is located, with port 8080:
http://ipaddress:8080
Use
curl http://169.254.169.254/latest/meta-data/public-ipv4
to get the public IP address.
Configuration management
--------------------------------------
There are two types:
1) Pull: example Puppet / Chef
The software automation is pulled from the master
by the slaves; slaves poll their master at a set
interval and get themselves updated.
2) Push: example Ansible / SaltStack
The master pushes all the information to the slaves (so we
can manage everything from a single node).
Pull (Puppet): software maintenance is harder, nodes must be
verified/registered, and the installation process is tedious.
Push (Ansible): software maintenance becomes easy, installation is
easy, and the master itself displays a summary of all nodes, so
every detail of a slave can be known easily.
Ansible
Please follow the instructions to create a GCP free tier account
from the link:
https://k21academy.com/google-cloud/create-google-cloud-free-tier-account/
Take two GCP instances.
Go to Compute Engine -> VM instances -> Create Instance;
give the name as
ansible-server.
Select 'Allow HTTP and HTTPS' at the bottom.
For the boot disk image (OS), select CentOS 8.
Screenshot as follows.
Or, on AWS.
(note: please follow the same settings for both the Ansible server and
the Ansible node)
The following steps must be followed for a successful
installation of Ansible
and communication between master and node.
Note:
hostnamectl set-hostname server   (we can set the hostname
using this command)
Master node (run as root: sudo su):
1. sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
2. yum install epel-release -y
3. yum install ansible -y
Note: the above steps are only for the master.
MASTER/SERVER:
1) vi /etc/ansible/hosts
   Give the IP address of your node under a group name, so that the
   master can talk to the node. Example:
   [dev-servers]
   35.188.64.137
   If you want to log in with another user:
   [dev-servers]
   [email protected]
2) ssh-keygen   (press Enter without typing anything)
3) ssh-copy-id [email protected]   (the IP address of the node)
SLAVE/NODE:
1) passwd   (to create a password)
2) vi /etc/ssh/sshd_config
   PasswordAuthentication yes
   PermitRootLogin yes
   Port 22
   systemctl restart sshd
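The /etc/ansible/hosts entries above can also carry connection variables, which avoids prefixing the user onto the address. A sketch; <node-ip> is a placeholder for your node's address:

```ini
# /etc/ansible/hosts (sketch) -- replace <node-ip> with your node's address
[dev-servers]
<node-ip> ansible_user=root
```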
If this fails:
go to the node, type passwd in the terminal, and choose a
password.
Now go to the sshd_config file:
vi /etc/ssh/sshd_config
Uncomment (# means comment):
Port 22
PermitRootLogin yes   (change from "prohibit-password" to "yes")
PasswordAuthentication yes
Close it, then:
systemctl restart sshd   (in order for the new changes to take
effect)
Now the master and node should communicate. Give the
command (from the master):
sudo ansible -m ping 'dev-servers'
This will be successful.
Let's install git on the node using a playbook on the master.
Open vi git.yml and type the following:
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure git is at the latest version
      yum: name=git state=latest
Run it with:
ansible-playbook git.yml
vi maven.yml
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure maven is at the latest version
      yum: name=maven state=latest
Run it with ansible-playbook maven.yml.
Let's install httpd.
vi httpd.yml
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure httpd is at the latest version
      yum: name=httpd state=latest
(If you want to see whether httpd is installed, type systemctl status
httpd; if you don't see it, run the following playbook.)
ansible-playbook httpd.yml
(this will install httpd on the node)
Go back to the node and check with systemctl status httpd.
Now you can see that httpd is installed (but not running).
Let's restart both services using with_items:
---
- hosts: dev-servers
  tasks:
    - name: Ansible service with_items example
      systemd:
        name: "{{ item }}"
        state: restarted
        daemon_reload: yes
      with_items:
        - 'sshd'
        - 'httpd'
Now you can see that httpd is running.
Use the command:
systemctl status httpd   (it says active and running now)
------
Ansible
Use the above AWS to create two instances with CentOS 8.2 x86.
(To avoid extra billing, use a community AMI only.)
While logging in from PuTTY, give 13.233.110.42 in the session,
and log in as root at the CentOS command prompt.
All the notes are inside the latest course materials: C:/softwares.
There you can find the Ansible folder.
Please refer to my notes for more detail.
Server:
sudo yum install epel-release
yum install ansible
vi /etc/ansible/hosts
[dev-servers]
172.24.21.22   (give the IP of the node; if there are multiple
nodes, give the IP addresses of the multiple nodes)
ssh-keygen
ssh-copy-id -i [email protected]   (if permission is denied, go to the
node and enable the required authentication; please don't use this
IP address, use the IP address of your node)
Try the previous step again; this time it will ask for the password
that we created on the node, after which the key will be copied
successfully:
ssh-copy-id -i [email protected]
ansible -m ping dev-servers
Now the above will succeed.
ansible-playbook git.yml   (create git.yml first)
Node:
passwd   (create a password and remember it)
vi /etc/ssh/sshd_config
Enable port 22, root login and password authentication.
systemctl restart sshd
Create git.yml; let's automate the git installation.
Create vi git.yml:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure git is at the latest version
      yum: name=git state=latest
For an Ubuntu or Debian OS:
- hosts: devops
  tasks:
    - name: install git
      apt:
        name:
          - git
        state: present
vi maven.yml
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure maven is at the latest version
      yum: name=maven state=latest
Let's install httpd and restart it using Ansible:
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest
Restart httpd using Ansible:
---
- hosts: dev-servers
  tasks:
    - name: Ansible httpd example
      systemd:
        name: httpd
        state: restarted
        daemon_reload: yes
vi nginx.yml
---
- hosts: dev-servers
  remote_user: root
  tasks:
    - name: ensure nginx is at the latest version
      yum: name=nginx state=latest
Multiple services:
- hosts: loc
  tasks:
    - name: Ansible service with_items example
      systemd:
        name: "{{ item }}"
        state: restarted
        daemon_reload: yes
      with_items:
        - 'sshd'
        - 'nginx'
Ansible roles
1. Ansible roles consist of many playbooks; they are similar to
modules in Puppet and cookbooks in Chef. We term the same thing
in Ansible as roles.
2. Roles are a way to group multiple tasks together into one
container, to do the automation in a very effective manner
with clean directory structures.
3. Roles are sets of tasks and additional files for a certain purpose
which allow you to break up the configuration.
4. The code can easily be reused by anyone if the role
suits them.
5. Roles are easy to modify and reduce syntax errors.
How do we create Ansible roles?
To create an Ansible role, use the ansible-galaxy command, which
has the templates to create it. This will create the role under the
default directory /etc/ansible/roles, where we then make our
modifications; otherwise we would need to create each directory and
file manually.
[root@learnitguide ~]# ansible-galaxy init /etc/ansible/roles/apache --offline
- apache was created successfully
[root@learnitguide ~]#
where ansible-galaxy is the command to create the roles
using the templates,
init initializes the role,
apache is the name of the role,
--offline creates it in offline mode rather than getting it from the
online repository.
(Use yum install tree to install the tree utility, to view the
structure in a better way.)
List out the directory created under /etc/ansible/roles.
● [root@learnitguide ~]# tree /etc/ansible/roles/apache/
/etc/ansible/roles/apache/
|-- README.md
|-- defaults
| `-- main.yml
|-- files
|-- handlers
| `-- main.yml
|-- meta
| `-- main.yml
|-- tasks
| `-- main.yml
|-- templates
|-- tests
| |-- inventory
| `-- test.yml
`-- vars
`-- main.yml
8 directories, 8 files
[root@learnitguide ~]#
We have got a clean directory structure from the ansible-galaxy command. Each of the main directories (tasks, handlers, defaults, vars, meta, tests) contains a main.yml file, which holds the relevant content.
Directory Structure:
tasks - contains the main list of tasks to be executed by the role.
handlers - contains handlers, which may be used by this role or even anywhere outside this role.
defaults - default variables for the role.
vars - other variables for the role; vars have higher priority than defaults.
files - contains files to be transferred or deployed to the target machines via this role.
templates - contains templates which can be deployed via this role.
meta - defines some data/information about this role (author, dependencies, versions, examples, etc.).
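A sketch of how defaults and tasks interact inside a role (the variable name pkg_name is hypothetical): tasks reference the variable, defaults/main.yml gives it a value, and anything set in vars/main.yml would override the default.

```shell
# Build a minimal role skeleton showing a defaults-driven task.
mkdir -p apache/defaults apache/tasks
cat <<'EOF' > apache/defaults/main.yml
pkg_name: httpd
EOF
cat <<'EOF' > apache/tasks/main.yml
- name: install the package chosen via defaults (overridable in vars)
  yum: name={{ pkg_name }} state=latest
EOF
echo "role files written"
```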
Let's take an example: creating a role for the Apache web server. Below is a sample playbook to deploy the Apache web server. Let's convert this playbook into an Ansible role.
---
- hosts: all
  tasks:
    - name: Install httpd Package
      yum: name=httpd state=latest
    - name: Copy httpd configuration file
      copy: src=/data/httpd.original dest=/etc/httpd/conf/httpd.conf
    - name: Copy index.html file
      copy: src=/data/index.html dest=/var/www/html
      notify:
        - restart apache
    - name: Start and Enable httpd service
      service: name=httpd state=restarted enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
First, move into the Ansible roles directory and start editing the yml files.
cd /etc/ansible/roles/apache
1. Tasks
Edit main.yml available in the tasks folder to define the tasks
to be executed.
[root@learnitguide apache]# vi tasks/main.yml
---
- name: Install httpd Package
  yum: name=httpd state=latest
- name: Copy httpd configuration file
  copy: src=/data/httpd.original dest=/etc/httpd/conf/httpd.conf
- name: Copy index.html file
  copy: src=/data/index.html dest=/var/www/html
  notify:
    - restart apache
- name: Start and Enable httpd service
  service: name=httpd state=restarted enabled=yes
Altogether, you can add all your tasks in this file, or break the code up further as below using "import_tasks" statements.
[root@learnitguide apache]# cat tasks/main.yml
---
# tasks file for /etc/ansible/roles/apache/main/
- import_tasks: install.yml
- import_tasks: configure.yml
- import_tasks: service.yml
Let's create install.yml, configure.yml, and service.yml, referenced from main.yml, with their actions in the same directory.
install.yml
[root@learnitguide apache]# cat tasks/install.yml
---
- name: Install httpd Package
  yum: name=httpd state=latest
configure.yml
[root@learnitguide apache]# cat tasks/configure.yml
---
- name: Copy httpd configuration file
  copy: src=files/httpd.conf dest=/etc/httpd/conf/httpd.conf
- name: Copy index.html file
  copy: src=files/index.html dest=/var/www/html
  notify:
    - restart apache
service.yml
[root@learnitguide apache]# cat tasks/service.yml
---
- name: Start and Enable httpd service
  service: name=httpd state=restarted enabled=yes
2. Files
Copy the required files (httpd.conf and index.html) to the
files directory.
[root@learnitguide apache]# ll files/*
-rw-r--r-- 1 root root 11753 Feb 4 10:01 files/httpd.conf
-rw-r--r-- 1 root root 66 Feb 4 10:02 files/index.html
[root@learnitguide apache]# cat files/index.html
This is a homepage created by learnitguide.net for ansible roles.
Edit index.html to:
<html>
<body>
<h1> we have used ansible roles to successfully deploy these files into multiple nodes </h1>
</body>
</html>
Now we need to get the httpd.conf file.
Let's install httpd on the server:
yum install httpd
Now copy httpd.conf from its installed location to the files folder, as follows:
cp /etc/httpd/conf/httpd.conf /etc/ansible/roles/apache/files/
[root@learnitguide apache]#
3. Handlers
Edit the handlers main.yml to restart the server when there is a change, because we have already referenced it in the tasks with the notify option. Use the same name, "restart apache", within the main.yml file as below.
[root@learnitguide apache]# cat handlers/main.yml
---
# handlers file for /etc/ansible/roles/apache
- name: restart apache
  service: name=httpd state=restarted
4. Meta
Edit the meta main.yml to add information about the role, such as author, description, license, and supported platforms.
[root@learnitguide apache]# cat meta/main.yml
galaxy_info:
  author: LearnItGuide.net
  description: Apache Webserver Role
  company: LearnITGuide.net
  # If the issue tracker for your role is not on github, uncomment the
  # next line and provide a value
  # issue_tracker_url: http://example.com/issue/tracker
  # Some suggested licenses:
  # - BSD (default)
  # - MIT
  # - GPLv2
  # - GPLv3
  # - Apache
  # - CC-BY
  license: license (GPLv2, CC-BY, etc)
  min_ansible_version: 1.2
  # If this a Container Enabled role, provide the minimum Ansible Container version.
------skipped
List out the created files now,
[root@learnitguide apache]# tree
.
|-- README.md
|-- defaults
| `-- main.yml
|-- files
| |-- httpd.conf
| `-- index.html
|-- handlers
| `-- main.yml
|-- meta
| `-- main.yml
|-- tasks
| |-- configure.yml
| |-- install.yml
| |-- main.yml
| `-- service.yml
|-- templates
|-- tests
| |-- inventory
| `-- test.yml
`-- vars
`-- main.yml
8 directories, 13 files
[root@learnitguide apache]#
We have got all the required files for the Apache role. Let's apply this role in the playbook "runsetup.yml" as below, to deploy it on the client nodes.
[root@learnitguide apache]# cat /etc/ansible/runsetup.yml
---
- hosts: dev-servers
  roles:
    - apache
[root@learnitguide apache]#
This play targets the "dev-servers" group (here, node2); you can also use "all" if needed. Specify the role name as "apache"; if you have created multiple roles, you can list them in the following format:
- apache
- nfs
- ntp
Let's verify for syntax errors:
[root@learnitguide apache]# ansible-playbook /etc/ansible/runsetup.yml --syntax-check
playbook: /etc/ansible/runsetup.yml
[root@learnitguide apache]#
No errors found. Let's move on to deploy the role.
[root@learnitguide apache]# ansible-playbook /etc/ansible/runsetup.yml
PLAY [node2]
***************************************************
************************************************
TASK [Gathering Facts]
***************************************************
**************************************
ok: [node2]
TASK [apache : Install httpd Package]
***************************************************
***********************
changed: [node2]
TASK [apache : Copy httpd configuration file]
***************************************************
***************
changed: [node2]
TASK [apache : Copy index.html file]
***************************************************
************************
changed: [node2]
TASK [apache : Start and Enable httpd service]
***************************************************
**************
changed: [node2]
RUNNING HANDLER [apache : restart apache]
***************************************************
*******************
changed: [node2]
PLAY RECAP
***************************************************
**************************************************
node2 : ok=6 changed=5 unreachable=0 failed=0
That's it. We have successfully deployed the Apache web server to the client node "node2" using Ansible roles. Log in to the client node "node2" and verify the following things.
[root@node2 ~]# rpm -q httpd
httpd-2.4.6-67.el7.centos.6.x86_64
[root@node2 ~]# systemctl status httpd
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service;
enabled)
Active: active (running) since Sun 2018-02-04 10:23:44 IST;
1min 58s ago
Docs: man:httpd(8)
man:apachectl(8)
Nagios
Installation of Nagios on your local machine (Windows). For Mac, please download the equivalent Mac software for VirtualBox (dmg):
https://download.virtualbox.org/virtualbox/6.1.22/VirtualBox-6.1.22-144080-OSX.dmg
Please open the Google Drive folder where you extracted the files.
You should find a VirtualBox installer; kindly install VirtualBox.
Once VirtualBox is installed:
Click on Import Appliance.
Give the path of the Nagios image, i.e. nagiosxi-5.4.8-64.ova (which is inside the Google Drive folder).
Import it successfully.
After importing you should see the Nagios image VM (by name).
Select it, right click, and click Start to start the VM.
If there are any errors, please update VirtualBox by clicking on Check for Updates.
Once upgraded, import again and try; sometimes you may still get a failure.
Then go to the VM's folder location, delete the VM folder, and import again.
Once you log in to Nagios:
Username: root
Password: nagiosxi
We can use Right Ctrl+F to toggle between full screen and normal screen.
cd /usr/local/nagios
cd libexec
Here we can see many plugins.
Type ls | more to see the plugins, scrolling down with the Enter key.
./check_ssh 192.168.1.11
It replies "SSH OK".
./check_tcp -H 192.168.1.10 -p 80 -w 0.05 -c 0.01
(Check the IP address with the ifconfig command.)
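Plugins like check_ssh and check_tcp follow a simple contract: one line of status output plus an exit code that Nagios maps to a state (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN). A minimal custom plugin sketch (the file-existence check is just a hypothetical stand-in for a real probe):

```shell
# Minimal Nagios-style plugin: one status line, standard exit codes.
cat <<'EOF' > check_file.sh
#!/bin/sh
# usage: ./check_file.sh /path/to/file
if [ -f "$1" ]; then
  echo "FILE OK - $1 exists"; exit 0
else
  echo "FILE CRITICAL - $1 missing"; exit 2
fi
EOF
chmod +x check_file.sh
./check_file.sh /etc/hosts
```

Dropping a script like this into libexec and referencing it from a command definition is all it takes for Nagios to schedule it.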
Let's see some objects (everything is configurable inside Nagios):
cd /usr/local/nagios/etc
Go to the GUI by entering the IP address in a browser, then give the username and password.
password: 2&Ldq3N7kNCYqSnPX3,s
Define the metrics, for example:
Why do we require monitoring?
What is monitoring?
What is to be monitored?
System monitoring:
processor
network
memory
hdd
CPU utilization
Application monitoring:
database
application logs
server (Tomcat, JBoss)
Continuous monitoring vs monitoring:
9 am - 6 pm - Monitor -> before 9 am and after 6 pm the system might be down, with no one to take action.
Nagios -> system ->
plugins ->
24x7 -> trigger 5 times before generating an alert - email, mobile
hdd space 100 GB, 80%
Application monitoring
behavioral monitoring
identify threats
Java application - intellipaat website
logs:
error - errors (non-working, breaking the application)
info - information (above debug, basic stuff, only info)
debug - debugging (detailed info, not recommended for production)
Kibana -
What is the challenge if you get a lot of information in the logs (unnecessarily)? Getting the relevant parts - filtering the logs.
Reading consumes a lot of time -> let's convert it into a GUI - pie charts, Gantt charts, bar charts.
Huge logs from different tools and applications inside a project - log aggregation - Logstash (1)
Indexing the huge amounts of information to get the required info - Elasticsearch (2)
Converting the information we have into charts - Kibana (3)
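The three numbered roles above map directly onto a Logstash pipeline: input (aggregation), filter (parsing), output (indexing into Elasticsearch, which Kibana then charts). A hypothetical pipeline config for Apache access logs (paths, host, and index name are assumptions for illustration):

```shell
# Sketch of a Logstash pipeline: file input -> grok parse with the
# built-in COMBINEDAPACHELOG pattern -> Elasticsearch output.
cat <<'EOF' > apache-pipeline.conf
input {
  file { path => "/var/log/httpd/access_log" }
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
output {
  elasticsearch { hosts => ["localhost:9200"] index => "apache-logs" }
}
EOF
echo "pipeline config written"
```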
Log aggregation ->
30 applications -> source -> 1, 2, 3
https://cloud.elastic.co/login
Let's create an account on this website.
Please store the username and password on your local system.
I am on my home page now; click "No thanks, I'll explore on my own".
Click on the flights (global flight dashboard).
Download the following:
https://github.com/aagea/elk-example
Select "Upload a file" from the home page.
Select the log file; here, upload apache_logs.
Click on Import.
Click "View index in Discover".
Now we can see the charts.
----
Kubernetes
GCP
AWS
Ubuntu Server 20.04 LTS (HVM), SSD Volume Type - ami-09e67e426f25ce0d7 (64-bit x86) / ami-00d1ab6b335f217cf (64-bit Arm)
Important note: please make sure you have enough CPU (dual core) and memory (4 GB RAM).
Please note we can still run Kubernetes on an AWS t2.micro (free tier) with a little tweak.
The hack is: sudo kubeadm init --ignore-preflight-errors=all
(give this while running the kubeadm init command)
Kubernetes is an orchestration tool.
We use the Ubuntu O/S here.
We will have one kubemaster and one kubenode.
We need to install Kubernetes on both master and node.
On the master, when we run the kubeadm init command we will get a token; copy that token and run it from any node that you want to connect to the master.
Master: install k8s; make it the master by running kubeadm init, which returns a token for joining nodes.
Slave: install k8s; join the master by running the kubeadm join command with that token.
Both master (control plane) and slave have kubeadm and kubectl installed.
Kubemaster:
sudo apt-get update && sudo apt-get install -y apt-transport-https
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cd /etc/apt/sources.list.d
vi kubernetes.list
And add the following (which is the Debian repository for installing Kubernetes):
deb http://apt.kubernetes.io/ kubernetes-xenial main
Exit the file.
Now:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
sudo kubeadm init --ignore-preflight-errors=all
This command will print a join token (please copy it and keep it handy, to run it from the slave).
If you face an error like "kubeadm init shows kubelet isn't running or healthy",
please apply the solution below:
vi /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Then exit and run:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
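The fix above as a copy-paste sequence. It writes to a stand-in path here so the snippet is safe to run anywhere; on the real node the file is /etc/docker/daemon.json, and the restart steps need docker and kubelet installed:

```shell
# Write the cgroup-driver override shown above (stand-in path).
mkdir -p /tmp/etc-docker
cat <<'EOF' > /tmp/etc-docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
# On the real node:
#   cp /tmp/etc-docker/daemon.json /etc/docker/daemon.json
#   systemctl daemon-reload && systemctl restart docker kubelet
echo "daemon.json written"
```

A malformed daemon.json stops Docker from starting at all, so it is worth double-checking the file is valid JSON before restarting.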
Execute
sudo kubeadm init --ignore-preflight-errors=all
kubeadm join 172.31.15.200:6443 --token l4piey.budkjct2m7gg4lfp --discovery-token-ca-cert-hash sha256:e3f82272d6c61123dbdc5188654c35378d36a3274cc9df81a615ec9f76101667 --ignore-preflight-errors=all
(In case there is a \ in the middle, please remove it and put everything on one line.)
(Note: please don't use the above values; they are for illustration purposes only.)
Before you join a node, you need to issue the following commands on the master (as the root user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
export KUBECONFIG=/etc/kubernetes/admin.conf
Kubenode
Joining a node
Before joining the node, we need to install Kubernetes. Please follow the steps below:
sudo apt-get update && sudo apt-get install -y apt-transport-https
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cd /etc/apt/sources.list.d
vi kubernetes.list
And add the following (which is the Debian repository for installing Kubernetes):
deb http://apt.kubernetes.io/ kubernetes-xenial main
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
If you face an error like "kubeadm init shows kubelet isn't running or healthy",
please apply the solution below:
vi /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Then:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
With everything in place, you are ready to join the node to the
master. To do this, go to the node's terminal and issue the
command:
kubeadm join 172.31.62.87:6443 --token 4de5ox.xaghrddb0hwrhzie --discovery-token-ca-cert-hash sha256:9013fe9a4b4af749733155530ea3a7511f5901b606831ae030754a3d508a4f9b --ignore-preflight-errors=all
(Note: please ensure that you put everything on a single line before executing the above command.)
Once the node has joined, go back to the master and issue the command sudo kubectl get nodes to see that the node has successfully joined.
If you see the status as NotReady, please run the following commands:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
And also, from the master only (please don't run the following commands on the slave):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now the status will change from NotReady to Ready.
In case none of the troubleshooting helps, then go for the k8s playground.
-----------------
https://docs.docker.com/desktop/windows/install/
Assessment
Files
vi test-namespace.json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "test",
    "labels": {
      "name": "test"
    }
  }
}
---
vi deploymentTest.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: sirfragalot/docker-demo:dcus
## Question 1 & 2
Create a deployment file called app using the image sirfragalot/docker-demo:dcus. The
deployment must have two replicas.
Please name the file deploymentTest.yaml.
Create the deployment in a namespace called test
Answer: (Files enclosed)
kubectl apply -f ./test-namespace.json -f ./deploymentTest.yaml -n test
kubectl get pods -n test
kubectl get services
kubectl get pods --all-namespaces
kubectl describe nodes my-node ( name of the node)
kubectl get events --sort-by=.metadata.creationTimestamp
# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
kubectl get pods app-64744cd559-d9vmr -n test (for single pod status)
kubectl describe pods app-64744cd559-d9vmr -n test
## Question 3
Expose the Deployment so that app's contents can be seen on your local machine. Hint
- sirfragalot/docker-demo:dcus runs on port 8080
Answer:
kubectl port-forward deployment/app 8080:8080 -n test &
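port-forward only lasts while the command runs. An alternative sketch is a NodePort Service whose selector matches the app: app label from deploymentTest.yaml (the manifest is only written here; applying it needs a running cluster):

```shell
# NodePort Service manifest matching the deployment's "app: app" label.
cat <<'EOF' > app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: test
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - port: 8080
      targetPort: 8080
EOF
# kubectl apply -f app-service.yaml
echo "service manifest written"
```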
## Question 4
Scale the replicas up to 5 and record the action. Show the recorded action and the new
replicas being used
Answer:
kubectl scale --current-replicas=2 --replicas=5 deploy/app -n test --record; kubectl rollout history deployment.v1.apps/app -n test
## Question 5
Using kubectl, show only the pods IPs and health under the headers IP and HEALTH
Answer:
kubectl get pods -o=custom-columns='IP:status.podIP,HEALTH:status.phase' -n test
https://docs.google.com/document/d/1n6VE9hEol3kmrbTago5yUbrgHcjmMLWlF-pT1HuSaek/edit (screenshots for the solution)
Terraform
------------------
Competitors of Terraform: CloudFormation (CloudFormation cannot be used for cloud providers other than AWS).
Terraform is used for IaC (infrastructure as code).
In order to avoid GUI clicks, actions, or events, we programmatically register these steps as scripts or templates, which can be run on multiple cloud systems.
Select Red Hat from the O/S list.
Once connected through PuTTY (after generating a key with PuTTYgen):
sudo yum update -y
You'll need wget and unzip - if you don't have them, install them by entering:
sudo yum install wget unzip -y
Download Terraform from the developer's website:
cd /usr/local/bin
sudo wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip
sudo unzip ./terraform_0.12.2_linux_amd64.zip
Or use the repo configuration and install Terraform:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install terraform
terraform -v
If the above doesn't work, use:
export PATH=$PATH:/usr/local/bin/
And try again by typing terraform (this should print the available options).
Yay! Terraform is installed successfully.
mkdir sample
cd sample
For the following example we need to create a user with an access key and secret key (this is how Terraform authenticates to the given cloud provider, in this case AWS).
Steps to create a user along with an access key and secret key:
Step 1) Search for IAM from the AWS dashboard.
Step 2) Select Users from IAM (search).
Step 3) From Users -> select Add user.
Step 4) Give a username, select Programmatic access, and click Next: Permissions.
Step 5) In Permissions, select the admin group and click Next. Click Tags and don't do anything.
Step 6) Review and create the user.
If the group does not exist, click on Create group, give a group name, and select the AdministratorAccess policy as in the following screenshot. Then create the group, click Next from Tags, and reach Create user.
Step 7) Finally we will get the access key and secret key with a success message.
Please stay on the same page.
Let's go back to our CentOS console.
vi main.tf
provider "aws" {
  region     = "us-east-1"
  access_key = "<your-access-key>"
  secret_key = "<your-secret-key>"
}
data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
  owners = ["099720109477"] # Canonical
}
resource "aws_instance" "ec2" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  tags = {
    Name = "created by srini"
  }
}
https://docs.aws.amazon.com/AmazonS3/latest/userguide/finding-canonical-user-id.html
Note: in the above script, please change the name in the tags and give your own access key and secret key.
In case you don't get your secret key, delete one of the two access keys so that you can generate a new pair and get the secret key as well.
Now check using ls -ltrha.
We can see the .terraform folder, which has all the provider code.
terraform init - to initialize
terraform plan - for checking changes
terraform apply - for applying the changes
Suppose I change the Ubuntu version; I can change it like the following:
filter {
  name   = "name"
  values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20210415"]
}
(Please note we can create a new instance with another version just by changing the Ubuntu version as above; I changed from 20.04 to 18.04.)
terraform destroy - for destroying
echo yes | terraform destroy (runs without the prompt; terraform destroy -auto-approve does the same)
Terraform using files and structures
Terraform Configuration Files and Structure
Let us first understand the Terraform configuration files before running Terraform commands.
● main.tf: This file contains code that creates or imports AWS resources.
● vars.tf: This file defines variable types and optionally sets their values.
● output.tf: This file helps in generating output from AWS resources. The output is generated after the terraform apply command is executed.
● terraform.tfvars: This file contains the actual values of the variables which we declared in vars.tf.
● provider.tf: This file is very important. You need to provide the details of providers such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and then work with its resources.
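The layout above is a convention, not a Terraform requirement - Terraform reads every *.tf file in the working directory. A sketch that creates the skeleton:

```shell
# Create the conventional file layout described above.
mkdir -p /tmp/terraform-demo && cd /tmp/terraform-demo
touch main.tf vars.tf output.tf terraform.tfvars provider.tf
ls
```

Splitting code across these files keeps variables, outputs, and provider credentials separate, which makes configurations easier to review and reuse.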
Launch multiple EC2 instances of the same type using count on AWS using Terraform
Now, in this demonstration we will create multiple EC2 instances using the count and for_each parameters in Terraform. So let's create all the configuration files required for creating EC2 instances in an AWS account using Terraform.
● Create a folder in the /opt directory and name it terraform-demo:
mkdir /opt/terraform-demo
cd /opt/terraform-demo
● Create a main.tf file under the terraform-demo folder and paste the content below.
resource "aws_instance" "my-machine" {
  count         = 4
  ami           = var.ami
  instance_type = var.instance_type
  tags = {
    Name = "my-machine-${count.index}"
  }
}
● Create a vars.tf file under the terraform-demo folder and paste the content below.
variable "ami" {
  type = string
}
variable "instance_type" {
  type = string
}
● Create terraform.tfvars file under terraform-demo folder and
paste the content below.
ami = "ami-0747bdcabd34c712a"
instance_type = "t2.micro"
● Create output.tf file under terraform-demo folder and paste the
content below.
Note: value depends on resource name and type ( same as that of
main.tf)
output "ec2_machines" {
  value = aws_instance.my-machine.*.arn
}
The values returned are Amazon Resource Names (ARNs).
provider.tf:
provider "aws" {
  region     = "us-east-1"
  access_key = ""
  secret_key = ""
}
● Now your files and code are ready for execution. Initialize Terraform:
terraform init
● Terraform initialized successfully; now it's time to run the terraform plan command.
● The Terraform plan is a sort of blueprint before deployment, to confirm that the correct resources are being provisioned or deleted.
terraform plan
● After verification, it's time to actually deploy the code using apply:
terraform apply
Great job, the Terraform commands executed successfully. Now we should have four EC2 instances launched in AWS.
Launch multiple EC2 instances of different types using for_each on AWS using Terraform
● In the previous example we created more than one resource, but all with the same attributes, such as instance_type.
● Note: we use for_each in Terraform when we need to create more than one resource but with different attributes, such as instance_type, key names, etc.
main.tf
resource "aws_instance" "my-machine" {
  ami = var.ami
  for_each = {          # for_each iterates over each key and value
    key1 = "t2.micro"   # Instance 1 will have key1 with the t2.micro instance type
    key2 = "t2.medium"  # Instance 2 will have key2 with the t2.medium instance type
  }
  instance_type = each.value
  key_name      = each.key
  tags = {
    Name = each.value
  }
}
Important note: please note t2.medium will cost you; kindly refrain from executing. This is for demonstration purposes only.
vars.tf
variable "tag_ec2" {
  type    = list(string)
  default = ["ec21a", "ec21b"]
}
variable "ami" { # Creating a variable for the ami
  type = string
}
terraform.tfvars
ami = "ami-0742a572c2ce45ebf"
instance_type = "t2.micro"
● Now the code is ready for execution; initialize Terraform, run the plan, and then use apply to deploy the code as described above.
terraform init
terraform plan
terraform apply
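The hard-coded map in main.tf above can also be moved into a variable, which makes it easy to keep both instances on the free tier (a sketch; the file is only written here, not applied):

```shell
# for_each driven by a map variable instead of an inline map; both
# values set to t2.micro to stay within the free tier.
cat <<'EOF' > instances.tf
variable "machines" {
  type    = map(string)
  default = { key1 = "t2.micro", key2 = "t2.micro" }
}
resource "aws_instance" "my-machine" {
  ami           = var.ami
  for_each      = var.machines
  instance_type = each.value
  key_name      = each.key
  tags          = { Name = each.key }
}
EOF
echo "instances.tf written"
```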
Conclusion
Terraform is a great open source tool which provides easy code and configuration files to work with. It's one of the best infrastructure-as-code tools to start with. You should now have an idea of how to launch multiple EC2 instances on AWS using Terraform count and for_each.
Terraform vs Ansible
Ansible is used for software installation, upgrades, configuration, and deployment of applications.
But Terraform can do more than that: Terraform can also create your infrastructure - OS, CPU, RAM, HDD, networking configuration - all of it can be described and managed.
terraform init (will create a base config); on top of this we can write our own configuration.
The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
The terraform plan command creates an execution plan. By
default, creating a plan consists of:
● Reading the current state of any already-existing remote objects to make sure that the Terraform state is up-to-date.
● Comparing the current configuration to the prior state
and noting any differences.
● Proposing a set of change actions that should, if applied,
make the remote objects match the configuration.
Then the terraform apply command executes the actions proposed in a Terraform plan.
The most straightforward way to use terraform apply is to run
it without any arguments at all, in which case it will
automatically create a new execution plan (as if you had run
terraform plan) and then prompt you to approve that plan,
before taking the indicated actions.
The terraform destroy command is a convenient way to
destroy all remote objects managed by a particular Terraform
configuration.
Always use terraform plan (to cross-verify the changes before applying).
Installation of terraform
https://www.terraform.io/downloads.html
https://github.com/GoogleCloudPlatform/terraform-google-
examples ( good examples )
https://jaxenter.com/tutorial-aws-terraform-147881.html
Bonus materials
--------------------
Docker material full course
https://drive.google.com/file/d/16ZwRWXFIoxfPHx-
Kdh20wicSdeDVoEB7/view?usp=sharing
https://k21academy.com/google-cloud/create-google-cloud-
free-tier-account/
Google drive for entire batch
https://drive.google.com/file/d/1PS-BnrSQvpGQ-
GqE1A6W8yUWjK8h7ctY/view?usp=drive_web
https://www.win-rar.com/fileadmin/winrar-versions/
winrar/winrar-x64-601.exe (use winrar to extract files from
google drive)
Contact me for any assistance
https://www.linkedin.com/in/srikanth-pb-9090a539/