DevOps increases an organization's efficiency in delivering applications by
1. improving delivery
2. automation
3. quality
4. monitoring
5. testing
Before DevOps came into the game, a developer or system administrator would just create the
application (say, in VS Code). A build and release engineer would then push it to a version
control system (Git was not launched yet, so e.g. SVN or CVS), a centralized code repository,
so that everyone could access it. After that a server administrator would build the app on a
server called the app server. Because so many people were involved, the process became
complicated and manual; DevOps emerged to solve these problems.
-> Suppose you are building a website chocolate.com; some important points are covered by the
SDLC method:
1. PLANNING AND REQUIREMENTS -> most important; suppose Kit Kat is not making a profit, so it
is planned to drop Kit Kat.
2. DEFINING -> build a document stating that Kit Kat is not important.
3. DESIGNING -> 3.1 HIGH LEVEL DESIGN -> suppose there is a cricket match and many people come
to watch it that day; HLD is used to handle that kind of load.
3.2 LOW LEVEL DESIGN -> create a small database and use it for normal things.
DEVOPS ENGINEER
4. BUILDING/CODING -> write code and push it to Git.
5. TESTING -> the tester takes it from Git and checks it on the server; this is done by the
quality assurance team.
6. DEPLOYMENT -> hand it over to the customer.
-> Suppose you purchased 5 servers of 100 GB and 100 cores each from IBM or HP and you deploy
chocolate.com, which only needs 4 GB and 4 cores, on one of them; the problem is that the
remaining 96 GB goes to waste. To solve this problem VIRTUAL MACHINES came into the game.
VIRTUAL MACHINES -> 1. Purchase a server, say server1, and give it to team1. Install a
HYPERVISOR (software, e.g. VMware, Xen) on that server; it lets you install virtual machines
on your physical server. After that, divide your single server into many virtual machines,
which is called LOGICAL ISOLATION.
1.1 Every VM has its own memory, hardware and processor.
NOTE - When a developer sends a request for a service, on the AWS side there is an API
responsible for receiving the request and checking whether it is VALID, AUTHENTICATED and
AUTHORIZED; after that you get the EC2 instance or service.
As a DevOps engineer you have to write a script that satisfies this AWS (valid, authenticated,
authorized) process, so you use the AWS CLI, AWS API, AWS CFT (CloudFormation templates),
AWS CDK (Cloud Development Kit) or Terraform to write a script that talks to the AWS API.
A sketch with the AWS CLI is shown below.
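A minimal AWS CLI sketch for requesting an EC2 instance, assuming credentials are already set
up; the AMI ID and key pair name here are placeholders, not real values:

# configure credentials once (access key, secret key, region)
aws configure

# request a single t2.micro EC2 instance; ami-xxxxxxxx and my-key are placeholders
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --count 1 \
  --key-name my-key

# check that the instance was created
aws ec2 describe-instances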
Why use the Linux OS?
1. free  2. secure  3. fast  4. many distributions  5. open source
Shell Scripting - we use this to create files or do useful work, because Linux does not have a
UI like Windows.
The shell is the interface between the user and the kernel (which connects system software
with the hardware). Common shells:
1. bash -> famous and useful for writing commands (shell scripting)
2. ksh
3. csh
4. zsh
Shell Scripting (sh) -> it helps to automate day-to-day activities on a Linux OS.
commands -
pwd - print the present working directory
cd - change directory (cd folder-name)
ls - list the contents of a directory (ls directory-name)
touch - for creating a file (touch name.sh)
vi - after creating a file, use (vi file-name) to write it; press (i) to start writing, then
press (Esc), then (:wq!) to save and quit.
man - used to see the details of a command (man command-name)
NOTE - after opening the file with vi, before writing the script, write this line first:
#!/bin/bash where #! is called the shebang and bash is the type of shell.
Why specify bash explicitly? Because the system default shell can differ: /bin/sh used to point
to bash by default, but nowadays dash is often the default /bin/sh, so writing #!/bin/bash
avoids surprises.
cat - if you just want the content of the file without opening it, use (cat file-name).
sh - execute the saved file with (sh filename) or (./filename); before that you need execute
permission, so use chmod. Permissions apply to 3 classes:
1. the owning user (yourself)
2. your group
3. all other users
So simply use (chmod 777 file-name) and then (./file-name) and it will execute.
The meaning of 7 is (4-read + 2-write + 1-execute).
history - it will show which commands you used previously
mkdir - for creating a new directory use (mkdir folder-name)
ls -ltr - shows a long listing sorted by modification time, with the exact timestamp at which
each file or folder was last changed
rm -rf - for deleting a folder or file (rm -rf folder/file-name)
top - to see the machine's health (running processes, CPU, memory); press Ctrl+C to stop it
df - displays disk space usage of file systems (df -h for human-readable sizes)
nproc - for finding the number of CPUs
free - provides information about the system's memory usage, including both physical memory
and swap
So after making the directory, create a file with (vi filename), press i and start writing the
script (image 2). We use set -x in the script shown in that image because it echoes each
command to the output as it runs, which helps with debugging. A small sketch of such a script
is below.
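A minimal sketch of that kind of script, using a hypothetical file name node-health.sh:

#!/bin/bash
# print each command before executing it, for easier debugging
set -x

# basic node health information
df -h    # disk usage
free -m  # memory usage in MB
nproc    # number of CPUs

Then make it executable and run it:
chmod 777 node-health.sh
./node-health.sh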
Suppose there is one server with numerous virtual machines; some processes handle Amazon data,
some Facebook, some Instagram. If we want the process IDs of the running processes of one of
them, use (ps -ef | grep "company name").
| - this symbol is called a pipe; it sends the output of one command to the input of the next.
NOTE - suppose we create a script file test; after writing it, run it using (./test).
If you want to print only those numbers which contain 1, use (./test | grep 1).
NOTE - in (date | echo "shubham") the output is just shubham, because echo ignores the piped
input from date and prints its own argument to stdout.
NOTE - (ps -ef | grep "company name" | awk -F " " '{print $2}') - the awk command prints the
2nd column, i.e. the process IDs of that company's processes.
NOTE - set -e  # exit the script when any command fails
set -o pipefail  # exit the script when any command in a pipeline fails
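A short sketch putting these pieces together; the service name "amazon" is just an example:

#!/bin/bash
# stop on errors, including failures inside pipelines, and echo each command
set -e
set -o pipefail
set -x

# print the process IDs (2nd column of ps -ef) of processes matching "amazon";
# grep -v grep excludes the grep command itself from the output
ps -ef | grep "amazon" | grep -v grep | awk -F " " '{print $2}'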
NOTE - the curl command helps retrieve information from another source over the network:
curl <paste link> then press Enter.
DIFF BETWEEN WGET AND CURL - wget downloads the file to disk, whereas curl by default just
prints the content to the terminal (it can also save to a file with -o).
NOTE - sudo su - switches to the root user; su means switch user.
NOTE - sudo find / -name file-name - if you want to find a file anywhere on the system.
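A small sketch combining these; the URL and file name are placeholders:

# print a page and filter it for a word
curl -s https://example.com | grep "html"

# download the same page to a local file instead
wget https://example.com

# search the whole filesystem for a file by name (root needed for some directories)
sudo find / -name "node-health.sh"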
HOW TO USE AN IF ELSE CONDITION?
Create a file (vim ifelse.sh), then start writing:
#!/bin/bash
a=10
b=5
# -gt is the numeric greater-than test; note the spaces inside [ ]
if [ $a -gt $b ]
then
  echo "a"
else
  echo "b"
fi
fi ends the if block.
HOW TO USE A FOR LOOP?
for i in {1..100}; do echo $i; done
NOTE - Ctrl+C will stop the script.
NOTE - curl <paste link> | grep <the specific thing you want to print>. The | symbol
represents a pipe.
NOTE - crontab - if you want to schedule something to run at a specific time.
HOW TO OPEN A FILE IN READ-ONLY MODE?
vim -R test.txt (note the capital R; lowercase -r is recovery mode)
NOTE - traceroute <hostname> helps trace the network path to a host.
NOTE - if you have a lot of log files that are not important, simply use logrotate (which can
compress them with gzip/zip).
CRONJOB - a task that has to run at a given (fixed) time; see the crontab sketch below.
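A small crontab sketch, assuming a hypothetical script /home/ubuntu/backup.sh:

# open your crontab for editing
crontab -e

# add a line like this to run the script every day at 2:00 AM
# (fields: minute hour day-of-month month day-of-week command)
0 2 * * * /home/ubuntu/backup.sh

# list the current cron entries
crontab -l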
VERSION CONTROL SYSTEMS -
1. CVS (centralized)
2. SVN (centralized)
3. Git and GitHub (distributed system) - every developer can have a full copy of the original
repository; copying a repository under your own account on GitHub is also called a FORK.
.. First install git in the terminal, then use (git init) for initialization. Make a
directory, say (mkdir shubham), list it using (ls), and create/edit a file using
(vim calculator). After that add it to git using (git add calculator); if you want to know the
status use (git status).
If you want to see the changes made in calculator, use (git diff). Then commit it using
(git commit -m "first version"). If you change something in shubham again, list it with (ls),
(git add) it again, check the status, and commit with a message
(git commit -m "second version"). After that check the history using (git log).
..... Suppose there are thousands of commits and you want to go back to one of them: run
(git log), copy the commit id, clear the screen, and then use (git reset --hard <commit id>).
example:
1. mkdir shubham
2. cd shubham
3. ls
4. vim calculator
5. git add calculator
6. git status calculator
7. git commit -m "first version"
8. git log
.. CLONE VS FORK - clone downloads the code from GitHub to your machine, while fork makes a
complete copy of the repository under your own account so you can treat it as your own.
CHECKOUT - used to switch between branches.
Git Branching - suppose you are working at the company behind WhatsApp and Mark wants to add
some new features. Instead of adding those functions directly to the main branch, you should
create a new branch from the existing one, do the work there, and then merge the two branches
afterwards.
EG.
1. install git
2. git init
3. mkdir shubham
4. vim calculator.sh
5. do some work, then git add calculator.sh
6. git commit -m "first"
7. another person does some work: open it (vim calculator.sh), do some work, add it and
git commit -m "second"
8. suppose the organization wants to add a feature named division but does not want to add it
directly to the main branch; then first
9. git checkout -b division -> it will create a new branch and switch to it
10. now you are in the division branch: do whatever work, then add and commit as above
11. now you have to merge the division branch into the main branch using git merge, git rebase
or git cherry-pick
12. go back to the main branch from the division branch using (git checkout master); before
that, copy the commit id of the division branch commit
13. use (git cherry-pick <commit id of the division branch>)
The full command sequence is sketched below.
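A sketch of the branching workflow above as plain commands; the file and branch names follow
the example, and the commit id is a placeholder:

# create and switch to a feature branch
git checkout -b division
vim calculator.sh               # add the division feature
git add calculator.sh
git commit -m "add division"

# note the commit id of the division commit
git log --oneline

# go back to the main branch and bring that single commit over
git checkout master
git cherry-pick <commit-id-of-division-commit>

# alternatively, merge the whole branch instead of one commit
# git merge division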
AWS SERVICES
1. EC2  2. VPC (private network)  3. EBS (volumes)  4. S3 (storage)  5. IAM
6. CloudWatch (monitoring)  7. Lambda (serverless)  8. CodeBuild and related build services
9. AWS Config  10. Billing and Cost Management  11. AWS KMS (keeps data/keys secure)
12. CloudTrail (tracking/auditing)  13. AWS EKS  14. Fargate or ECS  15. ELK (Elasticsearch
stack)
Configuration Management - Suppose there are thousands of servers; it is impossible to manage
all of them with a few system administrators, so we use 1. Ansible (most famous) 2. Puppet
3. Chef 4. Salt.
ANSIBLE - push model. 1. INVENTORY (a file that stores the IP addresses of the target
machines) 2. DYNAMIC INVENTORY (if any new instance is created on your AWS account, it will
automatically show up in your Ansible inventory). Written in Python, targets Linux or Windows,
playbooks are in YAML.
PUPPET - pull model, master-slave concept.
.. Install Ansible using the command (brew install ansible). After that generate a key pair on
your local machine using (ssh-keygen). SSH (Secure Shell) is for securely accessing a machine
over an unsecured network.
A private key and a public key are both generated. List the key files (ls), open the public
key file (cat) and copy the public key.
On the target machine, run (ls ~/.ssh/), then (vim ~/.ssh/authorized_keys) and paste the
public key there.
On AWS EC2 you have two instances, the ansible server and the target; copy the IP address of
the target and run (ssh <ip address>) from the ansible server, and it will log in without a
password.
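A sketch of the key setup; the target user and IP are placeholders:

# on the ansible/control machine: generate a key pair (accept the defaults)
ssh-keygen

# show the public key so you can copy it
# (the file may be id_ed25519.pub depending on the key type chosen)
cat ~/.ssh/id_rsa.pub

# on the target machine: paste that public key into the authorized_keys file
vim ~/.ssh/authorized_keys

# back on the control machine: passwordless login should now work
ssh ubuntu@<target-ip>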
Ansible ad hoc commands - for one or two quick commands.
Playbooks - used in Ansible for multiple commands/tasks. The inventory file stores all the IP
addresses of the targets, so for an ad hoc command just use
(ansible -i inventory all -m "shell" -a "touch file-name").
how to write a playbook? (vim first-playbook.yml)
---
- name: install nginx
  hosts: all
  become: true
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
    - name: start nginx
      service:
        name: nginx
        state: started
then run it with: ansible-playbook -i inventory first-playbook.yml
Terraform - suppose you want an AWS service and write a shell script for it, but after some
time the organization wants an Azure service; then you have to write another shell script for
Azure. To overcome this problem Terraform comes into the game.
It allows you to define, provision, and manage infrastructure resources in a
human-readable configuration file format, which can be versioned and shared across
teams.
use (brew install terraform)
In Terraform we write configuration code, and Terraform then makes the API calls to AWS saying
that the user wants a given AWS service.
Before executing anything, connect your machine to your AWS account using (aws configure);
then run (terraform plan) to preview what will be created, and (terraform apply) to execute.
Type yes and your EC2 instance will be created directly. Terraform uses the
(terraform.tfstate) file to track everything it manages. The command sequence is sketched
below.
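A sketch of the workflow, assuming the Terraform configuration (e.g. a main.tf describing an
EC2 instance) has already been written in the current directory:

# set up AWS credentials for Terraform to use
aws configure

# download the provider plugins for this configuration
terraform init

# preview what would be created or changed
terraform plan

# create the resources; answer "yes" at the prompt
terraform apply

# later, remove everything Terraform created
terraform destroy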
If you are working in an organization and you write Terraform code that is stored on GitHub,
and after some time another person from your organization edits it at the same time, it can
create problems for the EC2 resources (conflicting state). To avoid this, store the state
remotely: create an S3 bucket (for permanent storage of the state file) and a DynamoDB table
(for locking).
-> JENKINS - Jenkins is an automation server that is used to build, test and deploy software;
that process starts once you push your application to GitHub.
Previously you would have one EC2 instance called the (jenkins master) and many EC2 instances
connected to it called (nodes/agents), but the problem is that this wastes a lot of resources
when the agents sit idle. So now we use another technique, (using Jenkins with Docker
containers as agents), because Docker containers are lightweight compared to full nodes, the
cost is lower and the efficiency is higher.
Steps: create an EC2 instance for Jenkins from the terminal / install the Java JDK / install
Jenkins / copy the public IP address of the Jenkins EC2 and open it in the browser / fetch the
initial admin password from the terminal / run Jenkins / set up Docker as an agent inside
Jenkins / then copy the GitHub URL of the application you want to build, test or deploy and
configure it in the job that runs on the Docker agent.
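A rough sketch of the install commands on an Ubuntu EC2 instance; the package name and the
repository URL/key follow the standard Jenkins Debian/Ubuntu docs, so treat the exact values
as assumptions to verify:

# Java is required by Jenkins
sudo apt-get update
sudo apt-get install -y openjdk-17-jre

# add the Jenkins apt repository and install Jenkins
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | \
  sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
  sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install -y jenkins

# print the initial admin password, then open http://<public-ip>:8080 in a browser
sudo cat /var/lib/jenkins/secrets/initialAdminPassword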
PLUGIN - used for integration purposes: integrating Jenkins with Git, Docker or EC2.
Agile - Agile methodology is an approach to project management and software
development that emphasizes flexibility, collaboration, and customer satisfaction.
Instead of working on a project from start to finish in one go, Agile breaks the process
into smaller, manageable parts called “sprints” or “iterations.” Each sprint typically lasts
1-4 weeks and focuses on delivering a working product or feature.
JIRA-Jira is a project management and issue-tracking tool developed by Atlassian. It is
widely used in software development, especially for teams practicing Agile
methodologies like Scrum or Kanban, but it can also be used for general project
management across different industries.
Confluence or SharePoint - these are knowledge-sharing platforms. Suppose a group in a company
is working on some project; they log in to one of these platforms so that they can share
documents and knowledge with each other.
Containers - suppose my organization purchased 1 million EC2 instances or virtual machines
from AWS but only 900k are in use, so 100k go to waste, and this gives the company a very big
loss. To overcome this problem (Containers) came.
physical server (laptop) -> virtual machine or EC2 -> Docker -> containers
Just like a hypervisor helps to install VMs on a physical server, Docker helps to run
containers on a physical server or VM.
Also, a Docker image bundles the system dependencies, application dependencies and libraries,
so whenever the container needs them it uses them from the image, because containers do not
carry a full OS of their own.
WRITE A DOCKERFILE -> DOCKER ENGINE builds it -> DOCKER IMAGE -> run it as a CONTAINER.
(A Docker image can also be made using (BUILDAH).)
Docker daemon -> listens for API requests and manages Docker images and containers.
Docker registries -> store the Docker images.
Docker Hub -> where you share your Docker image with your client.
use (docker build -t name .) -> to build the Dockerfile from the terminal.
use (docker run -p host-port:container-port -it image-name) -> to run the image; see the
sketch below.
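A short sketch, assuming a hypothetical image called myapp whose Dockerfile is in the current
directory and whose app listens on port 8080:

# build the image from the Dockerfile in the current directory
docker build -t myapp .

# run it, mapping port 8080 on the host to port 8080 in the container
docker run -d -p 8080:8080 --name myapp-container myapp

# list running containers and view the app's logs
docker ps
docker logs myapp-container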
The problem with a normal Docker image is that it contains a lot of dependencies that are not
needed at runtime, so we use a (multi-stage build). You can create any number of stages in a
multi-stage build, but there is one final stage, the minimal stage.
In that minimal stage we use a (distroless) base image instead of a full OS image like Ubuntu,
because a smaller image means a smaller attack surface for hacking.
Distroless-> refers to a type of Docker image that doesn’t include a full operating system
(like Ubuntu or Alpine). Instead, it only contains the essential libraries and dependencies
required to run a specific application. The term “distroless” means “without a
distribution” (e.g., no Linux distribution like Ubuntu)
By using the multi-stage build technique the image size can be drastically smaller than a
normal Docker image (often an order of magnitude or more, depending on the app).
To check the image sizes, use (docker images | head -5).
The problem with containers is that they are lightweight and short-lived, so a log file we
write inside one will be gone once the container is removed. That's why we use:
1. BIND MOUNTS -> In Docker, bind mounts allow you to link a specific directory or file on
your host system directly to a directory or file in the container.
2. VOLUMES -> In Docker, volumes are used to persist data generated by and used in containers,
making it available even after a container is stopped or removed.
1. docker volume ls  2. docker volume create shubham  3. for details use (docker volume
inspect shubham)  4. for delete use (docker volume rm shubham)  5. for run use
(docker run -d --mount source=shubham,target=/app nginx:latest)  6. (docker ps) will give the
list of running containers
Docker Networking -> networking allows containers to communicate with each other and with the
outside world.
1. Bridge networking -> containers communicate with the host/system (and each other) through a
virtual bridge; this is the default.
2. Host networking -> the container shares the host's IP address and network stack directly.
3. Overlay / custom bridge networking -> suppose you have very sensitive information in some
containers and you do not want the other containers to reach them; Docker can use (veth)
interfaces to create a custom bridge, i.e. a separate bridge network just for the secure
containers (overlay networks extend this idea across multiple hosts). A sketch is below.
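A small sketch of the custom-bridge idea; the network and container names are made up for
illustration:

# list existing networks (bridge, host, none are the defaults)
docker network ls

# create a separate bridge network for the sensitive container
docker network create secure-net

# run the sensitive container on that network only
docker run -d --name secure-app --network secure-net nginx:latest

# inspect the network; containers on the default bridge cannot reach secure-app by name
docker network inspect secure-net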