Unit 2 Compressed
What is Cloud
The term cloud refers to a network or the Internet. In other words, we can
say that the cloud is something that is present at a remote location.
The cloud can provide services over public and private networks, i.e.,
WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship
management (CRM) execute on the cloud.
Types of Clouds
1. Public Cloud
Computing services, such as servers, storage, networking, and
applications, are provided over the internet by a third-party cloud
service provider.
These services are made available to the general public, and multiple
organizations or individuals share the same underlying infrastructure.
Examples of well-known public cloud providers include Amazon Web
Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM
Cloud, and Oracle Cloud.
2. Private Cloud
A private cloud refers to a cloud computing environment that is used
exclusively by a single organization.
Unlike public clouds, which are shared by multiple tenants, a private
cloud is dedicated to a specific business or entity.
This cloud deployment model provides greater control, customization,
and security for the organization's computing resources and services.
They are suitable for industries such as finance, healthcare, and
government, where regulatory compliance and data privacy are critical
considerations.
3. Hybrid Cloud
A hybrid cloud is a computing environment that combines elements of
both public and private clouds, allowing data and applications to be
shared between them.
In a hybrid cloud model, organizations can leverage the advantages of
both public and private clouds to meet their specific business
requirements.
Hybrid clouds are well-suited for organizations that have a mix of
workloads with varying requirements for performance, security, and
compliance
4. Community Cloud
A community cloud is a cloud computing model that is shared by
multiple organizations with common interests, concerns, or compliance
requirements.
This model is designed to meet the specific needs of a community of
users while providing more control and customization than a public
cloud.
Community clouds are particularly well-suited for industries or
sectors that have shared regulatory requirements or face similar
challenges. Examples include healthcare, finance, government, and
research institutions
Amazon Lex V2 is an AWS service for building conversational interfaces for applications using
voice and text.
Amazon Lex V2 enables any developer to build conversational bots quickly.
With Amazon Lex V2, no deep learning expertise is necessary—to create a bot, you specify
the basic conversation flow in the Amazon Lex V2 console.
Amazon Lex V2 manages the dialog and dynamically adjusts the responses in the
conversation.
Using the console, you can build, test, and publish your text or voice chatbot.
You can then add the conversational interfaces to bots on mobile devices, web applications,
and chat platforms (for example, Facebook Messenger).
Amazon Lex V2 provides integration with AWS Lambda, and you can integrate with many
other services on the AWS platform, including Amazon Connect, Amazon Comprehend, and
Amazon Kendra.
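For instance, once a bot is published, an application can send a user's text to it through the Lex V2 runtime API. The following is a minimal sketch using boto3; the bot ID, alias ID, and session ID are placeholders you would replace with your own values.

import boto3

# Connect to the Lex V2 runtime and send one user utterance to a published bot
lex = boto3.client('lexv2-runtime')

resp = lex.recognize_text(
    botId='BOT_ID',             # placeholder bot ID
    botAliasId='BOT_ALIAS_ID',  # placeholder alias ID
    localeId='en_US',
    sessionId='user-123',       # any identifier for this user's conversation
    text='I want to order a large pizza'
)

for message in resp.get('messages', []):
    print(message['content'])   # the bot's reply to show the user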
1. Bot
A bot performs automated tasks such as ordering a pizza, booking a hotel, ordering flowers,
and so on.
An Amazon Lex bot is powered by Automatic Speech Recognition (ASR) and Natural
Language Understanding (NLU) capabilities.
Amazon Lex bots can understand user input provided with text or speech and converse in
natural language.
You can create Lambda functions and add them as code hooks in your intent configuration
to perform user data validation and fulfillment tasks.
Each bot must have a unique name within your account.
2. Intent
An intent represents an action the user wants to perform. An intent can require zero or more
slots, or parameters; you add slots as part of the intent configuration.
At runtime, Amazon Lex prompts the user for specific slot values. The user must provide
values for all required slots before Amazon Lex can fulfill the intent.
For example, the OrderPizza intent requires slots such as pizza size, crust type, and number
of pizzas.
In the intent configuration, you add these slots.
For each slot, you provide slot type and a prompt for Amazon Lex to send to the client to
elicit data from the user.
A user can reply with a slot value that includes additional words, such as "large pizza please"
or "let's stick with small."
Slot type –
o Each slot has a type.
o You can create your custom slot types or use built-in slot types.
o Amazon Lex also provides built-in slot types. For example, AMAZON.NUMBER is a
built-in slot type that you can use for the number of pizzas ordered.
o Each slot type must have a unique name within your account.
o For example, you might create and use the following slot types for
the OrderPizza intent:
o Size – With enumeration values Small, Medium, and Large.
o Crust – With enumeration values Thick and Thin.
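At runtime, the elicited slot values are delivered to your code hook inside the Lex event. The following is a minimal sketch of reading the Size and Crust slots in a Python code hook; it assumes the Lex V2 event format used in the Lambda examples later in this unit, and the Count slot (of type AMAZON.NUMBER) is a hypothetical name for the number of pizzas.

def read_order_pizza_slots(event):
    # Lex V2 places elicited slot values under sessionState.intent.slots
    slots = event['sessionState']['intent']['slots']
    size = slots['Size']['value']['interpretedValue']    # e.g. "Large"
    crust = slots['Crust']['value']['interpretedValue']  # e.g. "Thin"
    count = slots['Count']['value']['interpretedValue']  # hypothetical AMAZON.NUMBER slot
    return size, crust, count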
Following are the typical steps you perform when working with Amazon Lex:
Create a bot and configure it with one or more intents that you want to support. Configure
the bot so it understands the user's goal (intent), engages in conversation with the user to
elicit information, and fulfills the user's intent.
Test the bot. You can use the test window client provided by the Amazon Lex console.
Publish a version and create an alias.
Deploy the bot. You can deploy the bot on platforms such as mobile applications or
messaging platforms such as Facebook Messenger.
Bot creation
You create an Amazon Lex V2 bot to interact with your users to elicit information to
accomplish a task.
To build a bot, you need the following information:
The language that the bot uses to interact with the customer.
The intents, or goals, that the bot helps the user fulfill.
The information, or slots, that you need to gather from the user to fulfill an intent.
The type of the slots that you need from the user.
The user interaction flow within and between intents
Lambda function
A Lambda function, in the context of cloud computing, typically refers to an AWS Lambda function.
AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It allows
you to run your code without the need to provision or manage servers. You can use AWS Lambda to
execute your code in response to events, such as changes to data in an Amazon S3 bucket, updates
to a DynamoDB table, or HTTP requests via API Gateway.
Serverless:
o With AWS Lambda, you don't need to manage servers.
o The service automatically scales and provisions the required compute resources for
your functions.
Event-Driven:
o Lambda functions are often triggered by events, such as changes in AWS services or
HTTP requests.
o You define the events that trigger the execution of your function.
Support for Multiple Runtimes:
o AWS Lambda supports multiple programming languages, allowing you to write your
functions in languages such as Python, Node.js, Java, C#, and more.
Scalability:
o Lambda functions can automatically scale in response to incoming traffic.
o Each function runs in isolation, and multiple instances of the same function can run
concurrently to handle increased load.
This function responds with an HTTP status code of 200 and a simple body message when triggered.
You can deploy this function and configure it to be triggered by various AWS services or events.
The lambda_handler function is the entry point for Lambda functions written in Python. A complete minimal handler looks like this:

def lambda_handler(event, context):
    response = {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }
    return response
In this example, the lambda_handler function receives two parameters: event and context.
The event parameter contains the input data that triggered the Lambda function, and
the context parameter provides information about the runtime environment.
Remember that you can customize the logic within the function based on your specific
use case.
When deploying this Lambda function on AWS, make sure to set up the necessary permissions,
triggers, and configurations based on your use case.
import json

def createIceCreamOrder(event):
    # Read the elicited slot values and a session attribute from the Lex V2 event
    firstName = event['sessionState']['intent']['slots']['name']['value']['interpretedValue']
    iceCreamFlavor = event['sessionState']['intent']['slots']['flavor']['value']['interpretedValue']
    iceCreamSize = event['sessionState']['intent']['slots']['size']['value']['interpretedValue']
    discount = event['sessionState']['sessionAttributes']['discount']
    # Your custom order creation code here.
    msgText = "Your Order for, " + str(iceCreamSize) + " " + str(iceCreamFlavor) + " IceCream has been placed with Order#: 342342"
    # prepareResponse (sketched below) builds the response object returned to Amazon Lex
    return prepareResponse(event, msgText)

def cancelIceCreamOrder(event):
    # Your order cancelation code here
    msgText = "Order has been canceled"
    return prepareResponse(event, msgText)

def lambda_handler(event, context):
    # Dispatch to the right handler based on the intent that triggered this code hook
    intentName = event['sessionState']['intent']['name']
    if intentName == 'CreateOrderIntent':
        response = createIceCreamOrder(event)
    elif intentName == 'CancelOrderIntent':
        response = cancelIceCreamOrder(event)
    else:
        raise Exception('The intent : ' + intentName + ' is not supported')
    return response
1) intentName = event['sessionState']['intent']['name']
This line of code is extracting the intent name from the event object.
The event object likely contains information about the current session state, and within
that,
it's accessing the intent name.
2) firstName = event['sessionState']['intent']['slots']['name']['value']['interpretedValue']
In an AWS Lambda function, you might receive an event object as an argument. The
structure of this object can vary depending on the trigger or service invoking the Lambda
function.
This code snippet extracts the interpreted value of the 'name' slot from an event
object.
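The intent handlers above return prepareResponse(event, msgText). The example response discussed below can be assembled by a helper like the following; this is a minimal sketch that assumes the standard Lex V2 Lambda response format (a dialogAction, the intent state, and a messages array).

def prepareResponse(event, msgText):
    # Build the response object that Amazon Lex V2 expects back from a code hook
    return {
        'sessionState': {
            'sessionAttributes': event['sessionState'].get('sessionAttributes', {}),
            'dialogAction': {'type': 'Close'},   # close the current dialog
            'intent': {
                'name': event['sessionState']['intent']['name'],
                'state': 'Fulfilled'             # the intent has been fulfilled
            }
        },
        'messages': [
            {'contentType': 'PlainText', 'content': msgText}
        ]
    }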
The "type": "Close" under "dialogAction" suggests that the system should close the
current dialog or session.
Intent State:
The "intent" block contains information about the intent, including its name and state.
In our example, the "state" is set to "Fulfilled," indicating that the intent has been
successfully processed or fulfilled.
Session Completion:
Using a type of "Close" often means that the Lambda function has completed the necessary
actions based on the user's intent, and the conversation is ready to be concluded.
Response Messages:
The response may include one or more messages (in the "messages" array) to provide
information back to the user.
In our example, there is a plain text message with the content specified by msgText.
------------------------------------------------------------------------------------------------------------------------------------
In the lambda_handler function, the return response statement will return the response object to
the entity that invoked the Lambda function.
This entity could be another AWS service, an API Gateway, or any other mechanism triggering the
execution of the Lambda function.
What is virtual network
Amazon VPC
• VPC customers can run code, store data, host websites, and do anything else they could
do in an ordinary private cloud, but the private cloud is hosted remotely by a public
cloud provider.
• This virtual network closely resembles a traditional network that you'd operate in your
own data center, with the benefits of using the scalable infrastructure of AWS.
• With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a
logically isolated virtual network that you've defined.
• The following diagram shows an example VPC. The VPC has one subnet in each of the
Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway
to allow communication between the resources in your VPC and the internet.
Features
1. Virtual private clouds (VPC)
A VPC is a virtual network that closely resembles a traditional network that you'd
operate in your own data center. After you create a VPC, you can add subnets.
2. Subnets
A subnet is a range of IP addresses in your VPC. A subnet must reside in a single
Availability Zone. After you add subnets, you can deploy AWS resources in your VPC.
3. IP addressing
You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can
also bring your public IPv4 addresses and IPv6 GUA addresses to AWS and allocate
them to resources in your VPC, such as EC2 instances, NAT gateways, and Network
Load Balancers.
4. Routing
Use route tables to determine where network traffic from your subnet or gateway is
directed.
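As an illustration of these features, the following is a minimal boto3 sketch that creates a VPC, a subnet, an internet gateway, and a route that sends internet-bound traffic to that gateway; the CIDR ranges and the Availability Zone are illustrative values.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# 1. The VPC itself
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# 2. A subnet in one Availability Zone
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24',
                           AvailabilityZone='us-east-1a')

# 3. An internet gateway attached to the VPC
igw = ec2.create_internet_gateway()
igw_id = igw['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. A route table with a default route that sends internet-bound traffic to the gateway
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt['RouteTable']['RouteTableId']
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock='0.0.0.0/0', GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet['Subnet']['SubnetId'])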
Benefits
1. Reliability
Although it's not always possible, customers expect 100% uptime and have little patience for
any downtime, not even ten minutes. VPC environments provide the redundancy and other
features required to meet near-100% uptime expectations.
With nearly 100% uptime, your customers will experience a high level of reliability that will
strengthen loyalty and trust in your brand.
2. Reduced risk
A VPC will provide you with high security at the instance and subnet level.
3. Flexibility
Whether your business is growing or changing, VPCs are flexible enough to move with your
business as needed. Cloud infrastructure resources are deployed dynamically, which makes
it easy to adapt a VPC to your changing needs.
4. Cost savings
Because of the elastic nature of public clouds, you only pay for what you use. With a VPC,
you won't need to pay for hardware or software upgrades, and you'll never pay for
maintenance.
Amazon Elastic Compute Cloud (Amazon EC2)
• Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable
compute capacity in the cloud. It is designed to make web-scale cloud computing easier for
developers.
• An Amazon EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for
running applications on the Amazon Web Services (AWS) infrastructure
• AWS is a comprehensive, evolving cloud computing platform; EC2 is a service that enables
business subscribers to run application programs in the computing environment
• It can serve as a practically unlimited set of virtual machines (VMs).
• With Amazon EC2, you can set up and configure the operating system and applications that
run on your instance
• Amazon provides various types of instances with different configurations of CPU, memory,
storage and networking resources to suit user needs. Each type is available in various sizes
to address specific workload requirements
• Amazon EC2’s simple web service interface allows you to obtain and configure capacity with
minimal friction. It provides you with complete control of your computing resources and lets
you run on Amazon’s proven computing environment.
• Amazon EC2 reduces the time required to obtain and boot new server instances to minutes,
allowing you to quickly scale capacity, both up and down, as your computing requirements
change.
• Amazon EC2 changes the economics of computing by allowing you to pay only for capacity
that you actually use.
• The following diagram shows a basic architecture of an Amazon EC2 instance deployed
within an Amazon Virtual Private Cloud (VPC).
o In this example, the EC2 instance is within an Availability Zone in the Region.
o The EC2 instance is secured with a security group, which is a virtual firewall that
controls incoming and outgoing traffic.
o A private key is stored on the local computer and a public key is stored on the
instance. Both keys are specified as a key pair to prove the identity of the user.
o The VPC communicates with the internet using an internet gateway.
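A minimal boto3 sketch of launching such an instance into a subnet of a VPC is shown below; the AMI ID, key pair, security group, and subnet ID are placeholders.

import boto3

ec2 = boto3.client('ec2')

result = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',            # placeholder AMI, e.g. an Ubuntu image
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',                      # key pair used to prove the user's identity
    SecurityGroupIds=['sg-0123456789abcdef0'],  # security group (virtual firewall)
    SubnetId='subnet-0123456789abcdef0'         # a subnet of the VPC
)
print(result['Instances'][0]['InstanceId'])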
Amazon Elastic Block Store (Amazon EBS)
Features of EBS
1. Block-Level Storage: EBS provides persistent block-level storage volumes
for use with EC2 instances. These volumes behave like raw, unformatted
block devices that can be attached to EC2 instances and used as storage
drives.
2. Elasticity and Scalability: EBS volumes can be easily created, attached,
and detached from EC2 instances. You can also scale the size and
performance characteristics of your EBS volumes dynamically to meet
the changing needs of your applications.
3. Data Persistence: EBS volumes are designed for durability and data
persistence. They are replicated within their Availability Zone to protect
against component failure, and you can also create snapshots of your
volumes to back up your data to Amazon Simple Storage Service (S3) for
long-term storage.
4. Durable Snapshots: EBS offers durable snapshot capabilities. This is
possible because EBS volumes are placed in a specific Availability Zone,
where they are automatically replicated to protect you from the failure
of a single component.
Amazon EBS Volume Types
There are different types of EBS volumes:
1. Solid State Drive (SSD) Volume
2. Hard Disk Drive (HDD) Volume
3. Magnetic Standard (MS) Volume
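A minimal boto3 sketch of these capabilities: create an SSD-backed volume, attach it to an instance, and snapshot it. The instance ID is a placeholder, and gp3 is assumed as the SSD volume type.

import boto3

ec2 = boto3.client('ec2')

# Create a volume in the same Availability Zone as the instance
vol = ec2.create_volume(AvailabilityZone='us-east-1a', Size=30, VolumeType='gp3')

# In practice, wait for the volume to become 'available' before attaching it
ec2.attach_volume(VolumeId=vol['VolumeId'],
                  InstanceId='i-0123456789abcdef0',  # placeholder instance ID
                  Device='/dev/sdf')

# Snapshots back up the volume's data to Amazon S3 for long-term storage
snap = ec2.create_snapshot(VolumeId=vol['VolumeId'], Description='daily backup')
print(snap['SnapshotId'])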
• Amazon supports file, object, and block-level storage. For file storage
you can use Elastic File System (EFS), for object storage you can use
AWS S3, and for block storage you can use AWS EBS.
• EFS can be used for applications that require a shared file system that
can be accessed by multiple computers at the same time.
• It can be used by applications that access data using the standard
file-system interface provided through the OS.
• Such applications can take advantage of the scalability and reliability
of storage in the cloud without writing any new code or adjusting the
application, which is not possible with the other two storage types.
• It can be used as a shared file system with EC2 instances: applications
running on multiple EC2 instances can access the file system at the same
time.
• With EFS, thousands of EC2 instances or on-premises servers from multiple
Availability Zones can concurrently access the file system.
• By contrast, with EBS only a single EC2 instance in a single Availability
Zone can access the data.
In this illustration, the virtual private cloud (VPC) has three Availability Zones.
Because the file system is Regional, a mount target was created in each
Availability Zone. We recommend that you access the file system from a mount
target within the same Availability Zone for performance and cost reasons.
One of the Availability Zones has two subnets. However, a mount target is
created in only one of the subnets.
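A minimal boto3 sketch of that setup: create a file system and then one mount target per Availability Zone. The subnet and security group IDs are placeholders, and in practice you wait for the file system to become available before creating mount targets.

import boto3

efs = boto3.client('efs')

fs = efs.create_file_system(CreationToken='my-shared-fs')
fs_id = fs['FileSystemId']

# One mount target per Availability Zone (placeholder subnet IDs)
for subnet_id in ['subnet-az1', 'subnet-az2', 'subnet-az3']:
    efs.create_mount_target(FileSystemId=fs_id,
                            SubnetId=subnet_id,
                            SecurityGroups=['sg-0123456789abcdef0'])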
Amazon Simple Storage Service (Amazon S3)
Benefits
S3 supports a max file (object) size of 5 terabytes, and you can upload an
unlimited number of files to S3; the only real limitation is that a single
file can't be larger than 5 terabytes.
S3 provides data protection: it replicates your objects (files) on multiple
devices within an Availability Zone, which is essentially a data center.
o In addition, objects are replicated across multiple Availability Zones
as well, so you can handle losing an entire data center, or the entire
data center going down, and your files will still be intact and
accessible. That is the data protection AWS provides with S3.
Working: To store your data in Amazon S3
o First create a Bucket and specify a bucket name and AWS region
o Upload your data to that bucket as objects
Bucket
o A Bucket is a container for objects.
o When you create an S3 bucket, you have to give it a name, and the name
must be globally unique across all AWS accounts.
o When an S3 bucket is created, AWS designates and reserves a URL for you
so that you can access the files within that bucket; these URLs
therefore have to be unique.
o Once we create a bucket, how do we access the files within it, and how
do we add new files to it? There are three different ways:
The first is to use the URL, which has the format
https://bucket-name.s3.region-code.amazonaws.com/key-name
The second is to use the console: you can go in, upload files,
and download files.
Lastly, we can do it programmatically through code: you can use
the S3 SDK to manipulate files, add new files and folders, and
so on, as sketched below.
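A minimal boto3 sketch of that programmatic access: create a bucket, upload a local file as an object, and download it again. The bucket name is a placeholder and must be globally unique.

import boto3

s3 = boto3.client('s3', region_name='us-east-1')

s3.create_bucket(Bucket='my-unique-bucket-name-12345')

# The key ('docs/report.pdf') is the object name used to retrieve it later
s3.upload_file('report.pdf', 'my-unique-bucket-name-12345', 'docs/report.pdf')
s3.download_file('my-unique-bucket-name-12345', 'docs/report.pdf', 'report-copy.pdf')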
Object
o Amazon S3 is an object store that uses unique key-values to store
objects.
o An object comprises three pieces of information:
key - represents the file name; it is used to retrieve the object.
value - the file data; the content you are storing.
version ID - a string that Amazon S3 generates when you add an
object to a bucket. Within a bucket, a key and version ID
uniquely identify an object.
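A minimal sketch of how a key and version ID identify an object; it reuses the placeholder bucket from the sketch above and assumes versioning is enabled on the bucket first.

import boto3

s3 = boto3.client('s3')
bucket = 'my-unique-bucket-name-12345'   # placeholder bucket from the sketch above

# Versioning must be enabled for S3 to generate version IDs
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={'Status': 'Enabled'})

put = s3.put_object(Bucket=bucket, Key='docs/report.pdf', Body=b'new revision')
print(put['VersionId'])                  # key + version ID uniquely identify this object

obj = s3.get_object(Bucket=bucket, Key='docs/report.pdf', VersionId=put['VersionId'])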
The developer develops the application (frontend and backend) and pushes it to GitHub. GitHub
will have two repositories: frontend and backend.
These two repositories are cloned onto our EC2 instance as a frontend repository and a backend
repository. When cloning, we inject two files: a Dockerfile, which has six attributes, and a
.env file.
To make the frontend and backend accessible to customers, we convert these folders (the project
code) into Docker images: one called client and the other called server. The backend code is
converted into the server image, and the frontend code into the client image. To make them
accessible, we run these Docker images in containers. Once the application is running in the
Docker environment, it can be accessed by the customers and by any clients who want to use it.
This is the entire structure: the developer writes some code and pushes it to GitHub; from
GitHub we clone it onto our EC2 instance; on the EC2 instance we install Docker; with Docker we
convert the projects into images; and in Docker containers we run the applications. This is how
an application running on a local server is deployed, or migrated, to a cloud environment.
So let's have a hands-on session on how to migrate a website from a local server to the cloud.
We are in our AWS cloud environment, where the first thing you need to do is search for EC2 and
click on the EC2 service. We will launch an EC2 instance; I am naming this instance Dockerdemo,
selecting Ubuntu as the operating system, and changing the instance type from t2.micro to
t2.large. I create a key pair and name it DockerDemoKey.
In the network settings I modify the VPC subnet, enable Auto-assign public IP, and add a
security group by clicking Add security group rule, changing Type to allow all traffic and
Source type to Anywhere. In Configure storage I increase the storage to 30 GB of capacity, and
now click on Launch instance.
Once the instance has launched, wait until it turns into the Running state. Once the instance
state is Running, click on the Instance ID and click on Connect. Under SSH, copy the command
used to connect to the EC2 instance.
Open the command prompt and move to Downloads; my .pem key was downloaded under Downloads. Now
continue connecting to the EC2 instance. Once connected, you can see a certain IP address in
the prompt.
Now the first thing we will do is clone the backend repository from GitHub using the command git
clone https://github.com/procareer3fwd/realgrandebackend.git, which is the URL of your GitHub
repository. Once this has been done, we clone the frontend from GitHub using the command git
clone https://github.com/procareer3fwd/realgrandefrontend.git. Now you can see that both the
realgrande backend and the realgrande frontend have been downloaded.
One thing before installing Docker: you need to update Ubuntu. Because Ubuntu is an open-source
operating system, it keeps updating its libraries and packages, so to stay up to date we update
our operating system using the command sudo apt update.
Next it's time to install Docker, which is done using the command sudo apt -y install docker.io.
This command installs Docker on your EC2 instance. Once Docker has been installed, cross-verify
that it was installed properly using the command sudo docker version; if you can see Client and
Server sections, Docker has installed properly.
Now check whether you have any images in Docker using the command sudo docker images. We don't
have any Docker images yet. To build a Docker image we need to inject the .env file and use the
Dockerfile.
So the first thing is to get into your backend directory by typing cd realgrandebackend/; in the
backend you need to inject a .env file. First create the .env file using the command nano .env
and paste these lines:
MONGODBURL="mongodb+srv://fsd04.2hxrdca.mongodb.net/realgrande?retryWrites=true
&w=majority"
DBUSERNAME=procareer3
DBPASSWORD=ISobjBDohsFqEAqg
FRONTENDURI="http://3.82.247.96"
And do remember that you need to change the public IP address here to the one of your EC2
instance; get the public IP from the EC2 instance, which is DockerDemo for me here. After
pasting into the .env file, press Ctrl+X, then Y, then Enter to save and quit.
As for the Dockerfile I mentioned, just type the command ls to see it; it is already present in
both the realgrande backend and frontend repositories. Now type cat Dockerfile. Here we have six
attributes, and all the attribute names are in upper case:
1. FROM node: to run your React/Node application you require some sort of environment
where an operating system as well as the middleware is present. The node image bundles
Node.js together with an operating system: Alpine is the operating system and Node.js is
the environment, or server, on which your application is going to run. So, in order to
use this, the first instruction is FROM node.
What the Dockerfile does is search the local machine for the node image; if it is not
present, it goes to Docker Hub and pulls this image, which has its own operating system
named Alpine and, on top of that, the middleware that runs the application, which is
Node.js.
2. The second attribute creates my own workspace (working directory), under which all the
files are going to be copied.
3. Then npm (the Node package manager) installs the dependencies.
4. In the fourth command the backend is exposed on port number 5000, on which it will be
running.
5. The application is started using npm start.
These are the six attributes mentioned in my Dockerfile.
Now, how do we build the image? Presently we don't have any images in Docker. To build an image,
use the command sudo docker build -t backend .
This tells Docker to build an image tagged backend based on the Dockerfile present in the
current directory (the trailing dot). Hit Enter and you can see it pulling the node image from
Docker Hub; once the pull is complete you can see it is on step one of six. Let us wait till it
completes all the steps.
In step two it creates its own working directory /app, and in the later steps it installs the
dependencies, exposes the port number, and finishes building the image.
Let's see whether the image is running in a container using the command sudo docker ps.
Presently it is not running. To make that image run in a container, I'll use the command
sudo docker run -d -p 2001:5000 backend
Here -d means detached mode, -p gives the port on which it has to be exposed (I can say it as
2001 mapped to the container's 5000), and the last argument is the image you want to run in the
container, namely the image we just built and successfully tagged as backend.
Just check whether it is running in a container using the command sudo docker ps. Yes, it has
its own container ID, and the name of the image running in the container is backend; it has been
exposed on port number 2001, while 5000 is the internal port. If I want to access it, I need to
use port number 2001. This is the port-binding concept.
So let's check whether we can connect to our backend through a browser: paste the public IP
3.82.247.96:2001/api and hit Enter. We can see JSON output, which means we could connect to our
backend.
Let's redirect to our frontend now to convert the frontend into an image. Type cd and then cd
realgrandefrontend. The first thing is to check whether we have a Dockerfile by typing ls; yes,
we do. Now type cat Dockerfile: it has the same six attributes, but the frontend will be exposed
on port number 3000.
Once the .env file has been set up, you can build the image:
sudo docker build -t frontend .
This time the node image has already been downloaded, so the build does not take as much time:
Docker checks its local repository first, finds that the node image is already present, and so
does not go to Docker Hub to download it again, because we already pulled it while building the
backend. Once this has been done, run the frontend image and check whether you can access your
frontend. Typing sudo docker images shows that the image has been successfully created: you can
see the frontend image, the node image, and the backend image; backend and frontend we created
ourselves, and node we pulled from Docker Hub.
Run it using the command sudo docker run -d -p 80:3000 frontend: -d and -p work as before,
80:3000 is the port mapping, and frontend is the image you want to run. Hit Enter, then check
whether it is running using sudo docker ps; you can see the frontend container running with its
own container ID, exposed on port number 80.
Let's get back to our browser and paste the public IP address 3.82.247.96; since the frontend is
exposed on port 80, the default HTTP port, I need not mention a port number. Hit Enter and you
can see that you can access your frontend and retrieve the data; all this data is being
retrieved from the MongoDB database. You can log in, you can sign up, everything.
Docker Architecture:
Docker consists of 3 main components:
Docker Engine: the core runtime that runs containers and manages their
lifecycle.
Docker Client: the command-line interface (CLI) that interacts with the
Docker Engine and sends commands and requests.
Docker Registry: the central repository that stores and distributes
Docker images, which are the building blocks of containers. A few
examples are Docker Hub where all the public images are maintained, or
cloud-based private docker registries like ACR (Azure Container
Registry), GCR (Google Container Registry), etc.
Basic Docker commands:
1) docker pull
We can download an image from a registry using the docker pull command, for example:
docker pull nginx
This will pull the latest docker image from the DockerHub.
The image will be stored in our local system.
We can specify specific tags if we want to pull older versions of
the image.
If no TAG is specified, it is considered at the latest.
2) docker images
We can see all the images in our local system by using the docker
images command.
Every image gets an IMAGE_ID that can be seen in the output of
this command.
The -a flag is used to view all the images available on the local
machine.
docker images -a
Output:
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 6efc10a0510f 11 days ago 142MB
3) docker run
Once the docker image is pulled we can run them as a container.
docker run nginx
This shows that the Nginx container is running on the local machine
using Docker Engine.
4) docker stop
We will now stop our running container.
We pass the CONTAINER_ID to stop the container:
docker stop [CONTAINER_ID]
The container goes into an EXITED state after it is stopped.
5) docker ps
We can list the containers using the docker ps command.
This will show us the state of the container — Running, stopped,
exited, etc. Every container also gets a CONTAINER_ID as we get for
an image.
The -a flag is used to list all the containers on the local machine.
docker ps -a
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62814eb60c1c nginx "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 80/tcp boring_zhukovsky
docker ps -a
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ab848abe7a nginx "/docker-entrypoint.…" 32 seconds ago Up 31 seconds 80/tcp relaxed_mendeleev
7) docker exec
• We can get inside a docker container and access the terminal of the
container using the exec command.
• The -it command states we will use interactive mode to connect to
the bash shell of the container.
docker exec -it [CONTAINER_ID] /bin/bash