What Is Amazon Inspector

Uploaded by

Amanpreet Kaur

What is Amazon Inspector?

Amazon Inspector is an automated security assessment service that tests the network
accessibility of EC2 instances. It helps you identify vulnerabilities within your
EC2 instances and applications, and it allows you to make security testing a more
regular part of development and IT operations.

Amazon Inspector provides a clear list of security and compliance findings, each
assigned a priority by severity level. These findings can be reviewed
directly or as part of comprehensive assessment reports available via the API or
the AWS Inspector console. AWS Inspector security assessments help you check for
unintended network accessibility of EC2 instances and for vulnerabilities on those
EC2 instances.

Benefits of AWS Inspector

Amazon Inspector is a safe and reliable service you can use to secure your
services, deployed applications, and more. It is an automated, managed service.
Let's look at some key benefits of AWS Inspector.

 Automated Service: AWS Inspector is a valuable service for securing
applications in the AWS cloud. It runs assessments automatically, without
human intervention.
 Regular Security Monitoring: Amazon Inspector helps find security
vulnerabilities in applications, as well as departures from security best
practices, both before deployment and while running in production. This
improves the overall security of your AWS-hosted applications.
 Leverage AWS Security Expertise: AWS Inspector includes a knowledge
base of rules mapped to common security best practices and
vulnerability definitions. Because AWS constantly updates these best
practices and rules, you get the best of both worlds.
 Integrate Security Into DevOps: AWS Inspector is an API-driven service
that analyzes network configurations in your AWS account and uses
an optional agent for visibility into EC2 instances. The agent makes it easy to
build Inspector assessments right into your existing DevOps process and
empowers both development and operations teams to make security
assessments an essential part of the deployment process.
 Network reachability rules package pricing: Assessments performed
by Amazon Inspector Classic that include network reachability rules are
priced per instance per assessment (instance assessment) per month. One
instance assessment is an assessment run against one instance; running one
assessment against ten instances produces ten instance assessments. With
volume discounts, pricing can drop from the starting price of $0.15 per
instance assessment per month to as little as $0.04 per instance assessment
per month.
 Host assessment rules package pricing: The host assessment rules
packages for Amazon Inspector Classic use an agent deployed on
the Amazon EC2 instances running the applications you want to evaluate.
Host rules assessments (sometimes known as "agent assessments") are
charged per agent per month. A single agent assessment is one performed
against a single agent; running one assessment against ten agents produces
ten agent assessments. With volume discounts, pricing can drop from the
starting price of $0.30 per agent assessment per month to as little as $0.05
per agent assessment per month.
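To make this pricing model concrete, here is a small sketch that computes the monthly cost of one assessment run. The function name is made up for illustration, and the default rate is the $0.15 network reachability list price quoted above:

```python
def assessment_cost(num_targets, price_per_assessment=0.15):
    """Estimate the monthly cost of one Inspector Classic assessment run.

    One assessment against N instances (or agents) counts as N billable
    assessments. The default rate is the network reachability list price
    quoted above; volume discounts can lower it to $0.04.
    """
    return num_targets * price_per_assessment

# One assessment against 10 instances = 10 instance assessments.
print(round(assessment_cost(10), 2))        # at the $0.15 list price
print(round(assessment_cost(10, 0.04), 2))  # at the fully discounted rate
```

The same arithmetic applies to agent assessments, just with the $0.30/$0.05 rates.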
How Does Amazon Inspector Work?

Amazon Inspector performs an automated assessment and generates a findings
report containing steps to keep the environment safe. To use this service, you
define an assessment target: the collection of AWS resources that make up the
application you want to test. You then add the security rules to apply and run
the assessment. You can also set the duration of the assessment, which can
range from 15 minutes to 12 hours, or last a full day.

An Inspector agent runs on the EC2 instances hosting the application and
monitors network, file system, and process activity. After the required data
is collected, it is compared against the built-in security rules to identify
security or compliance issues.
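The duration constraint mentioned above is easy to express as a tiny validation helper; this is a hypothetical sketch, not part of any AWS SDK:

```python
def is_valid_assessment_duration(minutes):
    """Check an assessment duration against the range described above:
    from 15 minutes up to a full day (24 hours)."""
    return 15 <= minutes <= 24 * 60
```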
AWS Trusted Advisor

AWS Trusted Advisor is an online resource that helps reduce cost, increase
performance, and improve security by optimizing your AWS environment.
Trusted Advisor provides real-time guidance to help you provision your resources
following best practices.

Trusted Advisor scans your AWS infrastructure and compares it to AWS best
practices in five categories:

 Cost Optimization.
 Performance.
 Security.
 Fault Tolerance.
 Service Limits.

Trusted Advisor comes in two versions.

Core Checks and Recommendations (free):

 Access to the seven core checks to help increase security and performance.
 Checks include S3 bucket permissions, Security Groups, IAM use, MFA on the
root account, EBS public snapshots, and RDS public snapshots.

Full Trusted Advisor (Business and Enterprise support plans):

 Access to the full set of checks across all five categories.

AWS Trusted Advisor Dashboard

Trusted Advisor provides a web-based dashboard, which you can access from the
AWS Management Console.

Image created by Amazon Web Services

The dashboard gives you an overview of the completed checks and results per
category.

 Green check: no problems


 Orange triangle: recommended investigations
 Red circle: recommended actions
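The three result states above can be sketched as a simple triage helper. The status labels below are descriptive names for the dashboard icons, not official API values:

```python
# Map each dashboard icon to its meaning (labels are illustrative).
STATUS_MEANING = {
    "green_check": "no problems",
    "orange_triangle": "recommended investigations",
    "red_circle": "recommended actions",
}

def triage(check_statuses):
    """Return the checks that need attention, most urgent first."""
    urgency = {"red_circle": 0, "orange_triangle": 1}
    flagged = [s for s in check_statuses if s in urgency]
    return sorted(flagged, key=urgency.get)
```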

Introduction to AWS Simple Storage Service (AWS S3)

AWS offers a wide range of storage services that can be configured depending on
your project requirements and use cases. AWS provides different types of
storage services for maintaining highly confidential data, frequently accessed
data, and infrequently accessed data. You can choose from various storage
service types such as Object Storage as a Service (Amazon S3), File Storage as
a Service (Amazon EFS), Block Storage as a Service (Amazon EBS), backups, and
data migration options.
What is Amazon S3?
Amazon S3 is the Simple Storage Service in AWS, which stores files of different
types, such as photos, audio, and videos, as objects, providing scalability and
security. It allows users to store and retrieve any amount of data at any
point in time from anywhere on the web. It offers features such as extremely
high availability, security, and simple integration with other AWS services.
What is Amazon S3 Used for?
Amazon S3 is used for various purposes in the cloud because of its robust
features for scaling and securing data. It supports all kinds of use cases
across fields such as mobile/web applications, big data, machine learning, and
many more. The following are a few common uses of the Amazon S3 service.
 Data Storage: Amazon S3 is an excellent option for scaling both small and
large storage applications. It helps store and retrieve data for data-intensive
applications as needed, in good time.
 Backup and Recovery: Many organizations use Amazon S3 to back up
their critical data and maintain data durability and availability for recovery
needs.
 Hosting Static Websites: Amazon S3 can store HTML, CSS, and
other web content, allowing users and developers to host static
websites with low-latency access and cost-effectiveness. For more
detail, refer to this article – How to host static websites using Amazon
S3.
 Data Archiving: Integration with the Amazon S3 Glacier service provides a
cost-effective solution for long-term storage of data that is accessed less
frequently.
 Big Data Analytics: Amazon S3 is often used as a data lake because of
its capacity to store large amounts of both structured and unstructured data,
offering seamless integration with other AWS analytics and machine
learning services.
What is an Amazon S3 bucket?
An Amazon S3 bucket is the fundamental storage container in the AWS S3
service. It provides a secure and scalable repository for storing objects such as
text data, images, audio, and video files in the AWS cloud. Each S3 bucket name
must be globally unique, and each bucket can be configured with an ACL (Access
Control List).
How Does Amazon S3 Work?
Amazon S3 organizes data into unique S3 buckets, which can be customized with
access controls. It allows users to store objects inside S3 buckets, with
features such as versioning and lifecycle management for data storage at scale.
The following are a few main features of Amazon S3.
1. Amazon S3 Buckets and Objects
Amazon S3 Bucket: Data in S3 is stored in containers called buckets. Each
bucket has its own set of policies and configurations, which gives users more
control over their data. Bucket names must be globally unique, and a bucket can
be thought of as a parent folder for data. There is a limit of 100 buckets per
AWS account, which can be increased on request through AWS support.
Amazon S3 Objects: The fundamental entity stored in AWS S3. You can store
as many objects as you want. The maximum size of a single S3 object is
5 TB. An object consists of the following:
 Key
 Version ID
 Value
 Metadata
 Subresources
 Access control information
 Tags
2. Amazon S3 Versioning and Access Control
S3 Versioning: Versioning keeps a record of every version of a file uploaded
to S3. Note that versioning is not enabled by default; once enabled, it applies
to all objects in a bucket. Versioning keeps all copies of your file, so it
adds cost for storing multiple copies of your data. For example, 10 copies of a
1 GB file will be charged as 10 GB of S3 space. Versioning helps prevent
unintended overwrites and deletions. Objects with the same key can be stored in
a bucket if versioning is enabled, since each has a unique version ID. To know
more about versioning, refer to this article – Amazon S3 Versioning.
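The billing example above can be checked with a one-line helper (an illustrative sketch, not an AWS API):

```python
def versioned_storage_gb(object_size_gb, num_versions):
    """With versioning enabled, every stored version is billed as a full
    copy, so 10 versions of a 1 GB file consume 10 GB of billable space."""
    return object_size_gb * num_versions
```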
Access Control Lists (ACLs): A document for controlling access to S3 buckets
from outside your AWS account. An ACL is specific to each bucket. You can
use S3 Object Ownership, an Amazon S3 bucket-level feature, to manage who
owns the objects you upload to your bucket and to enable or disable ACLs.
3. Bucket policies and Life Cycles
Bucket Policies: A document for controlling access to S3 buckets from within
your AWS account; it defines which services and users have what kind of access
to your S3 bucket. Each bucket has its own bucket policies.
Lifecycle Rules: A cost-saving practice that can move your files to AWS
Glacier (the AWS data archive service) or to another S3 storage class for
cheaper storage of old data, or delete the data entirely after a specified time.
To know more, refer to this article – Amazon S3 Life Cycle Management.
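A lifecycle rule of the kind described above is just a configuration document. The sketch below builds one in the JSON shape used by the S3 lifecycle API; the prefix, day counts, and rule ID are placeholder values:

```python
import json

def archive_rule(prefix, glacier_after_days, expire_after_days):
    """Build a lifecycle rule that transitions old objects to Glacier and
    later deletes them. All values here are placeholders for illustration."""
    return {
        "ID": f"archive-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }

# Move objects under logs/ to Glacier after 30 days, delete after a year.
config = {"Rules": [archive_rule("logs/", 30, 365)]}
print(json.dumps(config, indent=2))
```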
4. Keys and Null Objects
Keys: The key, in S3, is a unique identifier for an object in a bucket. For
example, if your GFG.java file in a bucket 'ABC' is stored at
javaPrograms/GFG.java, then 'javaPrograms/GFG.java' is the object key for
GFG.java.
Null Object: When versioning is suspended in a bucket, the version ID for
objects stored there is null. Such objects may be referred to as null objects.
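Since S3 has no real directories, the "folder" in a key like 'javaPrograms/GFG.java' is just a prefix on the key string. A small sketch:

```python
def split_key(key):
    """Split an object key into its prefix ("folder") and file name.
    S3 treats the key as one flat string; the "/" is only a convention."""
    prefix, _, name = key.rpartition("/")
    return prefix, name

print(split_key("javaPrograms/GFG.java"))  # ('javaPrograms', 'GFG.java')
```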
How To Use an Amazon S3 Bucket?
You can use Amazon S3 buckets by following the simple steps mentioned below.
To learn how to configure Amazon S3, refer to Amazon S3 – Creating a S3 Bucket.
Step 1: Log in to your Amazon account with your credentials, search for S3,
and click on S3. Now click on the "Create bucket" option and configure the
settings shown during setup.
Step 2: After creating the bucket, upload objects into it as required, using
either the AWS console or the AWS CLI. The following is the command to upload
an object into an AWS S3 bucket:
aws s3 cp <local-file-path> s3://<bucket-name>/
Step 3: You can control the permissions of the objects uploaded into S3
buckets, as well as who can access the bucket. You can make the bucket public
or private; by default, S3 buckets are private.
Step 4: You can manage the S3 bucket lifecycle through transitions. Based on
the rules you define, objects in the bucket will transition into different
storage classes according to their age.
Step 5: Enable services to monitor and analyze S3. For example, enable S3
access logging to record who requests the objects in your S3 buckets.
What are the types of S3 Storage Classes?
AWS S3 provides multiple storage types that offer different performance and
features and different cost structures.
 Standard: Suitable for frequently accessed data, that needs to be highly
available and durable.
 Standard Infrequent Access (Standard IA): This is a cheaper data-storage
class and as the name suggests, this class is best suited for storing infrequently
accessed data like log files or data archives. Note that there may be a per GB
data retrieval fee associated with the Standard IA class.
 Intelligent Tiering: This service class classifies your files automatically into
frequently accessed and infrequently accessed and stores the infrequently
accessed data in infrequent access storage to save costs. This is useful for
unpredictable data access to an S3 bucket.
 One Zone Infrequent Access (One Zone IA): All the files on your S3 have
their copies stored in a minimum of 3 Availability Zones. One Zone IA stores
this data in a single availability zone. It is only recommended to use this
storage class for infrequently accessed, non-essential data. There may be a per
GB cost for data retrieval.
 Reduced Redundancy Storage (RRS): All the other S3 classes ensure
durability of 99.999999999%; RRS only ensures 99.99% durability. AWS no
longer recommends RRS because of its lower durability. However, it can be
used to store non-essential data.
To know more, refer to this article – Amazon S3 Storage Classes
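The trade-offs above can be summarized in a small chooser function. The 10-access threshold is an illustrative assumption, not AWS guidance; the class names match the S3 API constants:

```python
def suggest_storage_class(accesses_per_month, essential=True):
    """Pick a storage class from the descriptions above.

    The 10-access cutoff between "frequent" and "infrequent" is an
    arbitrary illustrative threshold.
    """
    if accesses_per_month >= 10:
        return "STANDARD"       # frequently accessed, highly available
    if not essential:
        return "ONEZONE_IA"     # single AZ, cheaper, less resilient
    return "STANDARD_IA"        # cheaper storage, per-GB retrieval fee
```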
How to Upload and Manage Files on Amazon S3?
First, you need an Amazon S3 bucket for uploading and managing files on
Amazon S3; create one as discussed above. Once the S3 bucket is created, you
can upload files in various ways, such as through AWS SDKs, the AWS CLI, or the
Amazon S3 Management Console. Manage files by organizing them into folders
within the S3 bucket and applying access controls to secure access. Features
like versioning and lifecycle policies help manage data efficiently while
optimizing storage classes. For more detail, refer to this article – How to
Store and Download Objects in Amazon S3?
How to Access Amazon S3 Bucket?
You can work and access the Amazon S3 bucket by using any one of the
following methods
1. AWS Management Console
2. AWS CLI Commands
3. Programming Scripts ( Using boto3 library of Python )
1. AWS Management Console
You can access an AWS S3 bucket using the AWS Management Console, which is a
web-based user interface. First, create an AWS account and log in to the web
console; from there, choose the S3 bucket option from the Amazon S3 service.
( AWS Console >> Amazon S3 >> S3 Buckets )
2. AWS CLI Commands
In this method, first install the AWS CLI in your terminal and configure your
AWS account with the access key, secret key, and default region. Running
`aws help` shows the available services, including S3 usage. For example, to
list your buckets, run the following command:
aws s3 ls
3. Programming scripts
You can work with Amazon S3 buckets using a scripting language such as Python,
with libraries such as boto3, to perform AWS S3 tasks. To know more, refer to
this article – How to access Amazon S3 using python script.
AWS S3 Bucket Permissions
You can manage the permissions of S3 buckets using several methods; the
following are a few of them.
1. Bucket Policies: Bucket policies can be attached directly to the S3 bucket.
They are written in JSON and operate at the bucket level. With bucket
policies, you can grant permissions to users who may access the objects
in the bucket, such as permission to download and upload objects. You can
also create bucket policies using Python.
2. Access Control Lists (ACLs): ACLs are a legacy access control mechanism
for S3 buckets; bucket policies are now generally preferred for controlling
S3 bucket permissions. Using ACLs, you can grant read and write access to
the S3 bucket or make objects public, based on your requirements.
3. IAM Policies: IAM policies are mostly used to manage the permissions to the
users and groups and resources available in the AWS by using the IAM roles
options. You can attach an IAM policy to an IAM entity (user, group, or role)
granting them access to specific S3 buckets and operations.
The most effective way to control the permissions to the S3 buckets is by using
bucket policies.
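As an example of a JSON bucket policy built from Python, the sketch below grants one principal read-only access to a bucket's objects. The bucket name and principal ARN are placeholders:

```python
import json

def read_only_policy(bucket, principal_arn):
    """Build a minimal bucket policy allowing one principal to read
    (download) objects. Real policies should be tailored and reviewed."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": principal_arn},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })
```

Such a document could then be attached with the console, the CLI, or boto3.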
Features of Amazon S3
 Durability: AWS claims Amazon S3 has 99.999999999% durability
(11 9's), which means the chance of losing an object stored on S3 in a given
year is roughly one in 100 billion.
 Availability: AWS ensures that the up-time of AWS S3 is 99.99% for
standard access.
o Note that availability is related to being able to access data and
durability is related to losing data altogether.
 Server-Side-Encryption (SSE): AWS S3 supports three types of SSE
models:
o SSE-S3: AWS S3 manages encryption keys.
o SSE-C: The customer manages encryption keys.
o SSE-KMS: The AWS Key Management Service (KMS) manages
the encryption keys.
 File Size support: AWS S3 can hold files of size ranging from 0 bytes to 5
terabytes. A 5TB limit on file size should not be a blocker for most of the
applications in the world.
 Infinite storage space: Theoretically AWS S3 is supposed to have infinite
storage space. This makes S3 infinitely scalable for all kinds of use cases.
 Pay as you use: The users are charged according to the S3 storage they hold.
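The 0-byte-to-5-TB per-object limit above is easy to sanity-check in code (a hypothetical helper, not an SDK call):

```python
MAX_OBJECT_BYTES = 5 * 1024**4  # 5 TB per-object upper limit

def is_valid_object_size(size_bytes):
    """Return True if a size fits S3's documented per-object limits."""
    return 0 <= size_bytes <= MAX_OBJECT_BYTES
```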
Advantages of Amazon S3
1. Scalability: Amazon S3 scales horizontally, which lets it handle large
amounts of data. It scales automatically, without human intervention.
2. High availability: The Amazon S3 bucket is known for its high availability;
you can access your data whenever you need it, from any region. It offers a
Service Level Agreement (SLA) guaranteeing 99.9% uptime.
3. Data Lifecycle Management: You can manage the data stored in an S3 bucket
by automating the transition and expiration of objects based on predefined
rules. For example, you can automatically move data to Standard-IA or
Glacier after a specified period.
4. Integration with Other AWS Services: You can integrate S3 with other AWS
services; for example, an AWS Lambda function can be triggered when files
or objects are added to the S3 bucket.

Introduction to AWS Elastic Block Store(EBS)


AWS Storage Services: AWS offers a wide range of storage
services that can be provisioned depending on your project
requirements and use case. AWS storage services have different
provisions for highly confidential data, frequently accessed data,
and not-so-frequently accessed data. You can choose from
various storage types, namely object storage, file storage, block
storage services, backups, and data migration options, all of which
fall under the AWS Storage Services list.
Elastic Block Storage (EBS): From the aforementioned list, EBS is
a durable, persistent block storage type that can be attached to
EC2 instances for additional storage. Unlike EC2 instance store
volumes, which are suitable for holding temporary data, EBS volumes
are highly suitable for essential and long-term data. EBS volumes
are specific to Availability Zones and can only be attached to
instances within the same Availability Zone.
EBS volumes can be created from the EC2 dashboard in the console, as
well as in Step 4 of the EC2 launch wizard. Note that when creating
EBS with EC2, the EBS volumes are created in the same Availability
Zone as the EC2 instance; when provisioned independently, users can
choose the AZ in which the EBS volume is required.
Features of EBS:
 Scalability: EBS volume sizes and features can be scaled as per
the needs of the system. This can be done in two ways:
o Take a snapshot of the volume and create a new volume
using the Snapshot with new updated features.
o Updating the existing EBS volume from the console.
 Backup: Users can create snapshots of EBS volumes that act as
backups.
o Snapshot can be created manually at any point in time or
can be scheduled.
o Snapshots are stored on AWS S3 and are charged
according to the S3 storage charges.
o Snapshots are incremental in nature.
o New volumes across regions can be created from
snapshots.
 Encryption: Encryption can be a basic requirement when it
comes to storage, often due to government or regulatory
compliance. EBS offers an AWS-managed encryption feature.
o Users can enable encryption when creating EBS volumes
by clicking on a checkbox.
o Encryption Keys are managed by the Key Management
Service (KMS) provided by AWS.
o Encrypted volumes can only be attached to selected
instance types.
o Encryption uses the AES-256 algorithm.
o Snapshots from encrypted volumes are encrypted and
similarly, volumes created from snapshots are encrypted.
 Charges: Unlike AWS S3, where you are charged for the storage
you consume, EBS charges for the storage you provision. For
example, if you use 1 GB of storage in a 5 GB volume, you are
still charged for the full 5 GB EBS volume.
o EBS charges vary from region to region.
 EBS Volumes are independent of the EC2 instance they are
attached to. The data in an EBS volume will remain unchanged
even if the instance is rebooted or terminated.
 A single EBS volume can only be attached to one EC2 instance at a
time. However, one EC2 instance can have more than one EBS volume
attached to it.
 EBS volumes are specific to Availability Zones and can only be
attached to EC2 instances in the same Availability Zone. If that
Availability Zone goes down, access to the EBS volume will be
lost.
 Can be used for rapidly changing data that needs high IOPS.
 Compared to EC2 instance storage, the control over data and
flexibility offered by EBS is far greater.
 For durability, EBS volumes are replicated within their
Availability Zone, but they are limited to that one Availability Zone.
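The "charged for what you provision" model above contrasts with S3's pay-for-what-you-store model. A small sketch, where the per-GB price is a placeholder since EBS rates vary by region and volume type:

```python
def ebs_monthly_charge(provisioned_gb, price_per_gb_month):
    """EBS bills for provisioned capacity, not consumed capacity:
    using 1 GB inside a 5 GB volume still bills for all 5 GB."""
    return provisioned_gb * price_per_gb_month

# Same bill whether 1 GB or 5 GB of the volume is actually used.
print(round(ebs_monthly_charge(5, 0.10), 2))
```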
Types of EBS Volumes:
SSD: This storage type is suitable for small chunks of data that require fast
IOPS. SSDs can be used as root volumes for EC2 instances.
 General Purpose SSD (GP2)
o Offers single-digit-millisecond latency.
o Can burst up to 3,000 IOPS.
o Baseline IOPS scales with volume size, within the 3–10,000 IOPS range.
o Throughput is 128 MB/s for volumes up to 170 GB; beyond that,
throughput increases by 768 KB/s per GB and peaks at 160 MB/s.
 Provisioned IOPS SSD (IO1)
o These SSDs are for I/O-intensive workloads.
o Users can specify the IOPS requirement at creation.
o Size ranges from 4 GB to 16 TB.
o According to AWS, these volumes, when attached to EBS-optimized
instances, deliver within 10% of the provisioned IOPS performance
99.9% of the time over a year.
o Maximum IOPS is 20,000.
HDD: This storage type is suitable for big data chunks and slower processing.
These volumes cannot be used as root volumes for EC2. AWS claims that these
volumes deliver their expected throughput 99.9% of the time over a year.
 Cold HDD (SC1)
o SC1 is the cheapest of all EBS volume types. It is suitable for large,
infrequently accessed data.
o Maximum burst speed is 250 MB/s.
 Throughput Optimized HDD (ST1)
o Suitable for large, frequently accessed data.
o Burst speed ranges from 250 MB/s to 500 MB/s.
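The GP2 baseline IOPS scaling described above (3 IOPS per GB within a 3–10,000 range) can be sketched as follows; the caps follow the figures quoted in the text:

```python
def gp2_baseline_iops(size_gb):
    """Baseline gp2 IOPS per the figures above: 3 IOPS per GB,
    clamped to the quoted 3-10,000 IOPS range."""
    return min(max(3 * size_gb, 3), 10_000)

print(gp2_baseline_iops(100))   # a 100 GB volume
print(gp2_baseline_iops(5000))  # a large volume hits the cap
```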

The above image shows single EBS volumes attached to their respective EC2
instances. (Note that an EBS volume cannot be shared between two instances;
however, one EFS file system can be attached to multiple EC2 servers.) These
volumes have multiple use cases, as discussed below:
 Database storage: Given the low latency and scalability offered by EBS, it is
highly suitable for storing relational as well as NoSQL databases.
 Business-critical applications: Given the scheduled backups offered by EBS
as snapshots, data recovery is quick and a refreshed system can be
rebooted efficiently with minimal data loss.
 Hard disks for EC2 servers: EBS volumes can be used as hard drives for
your EC2 servers. They are independent of your EC2 servers, so your
data in these volumes is safe even if an EC2 server fails, reboots, or terminates.
 Hosting large applications: EBS provides exceptionally low latency,
contributing to the overall computing power of the architecture. It can be
used to hold large enterprise application software and data.
 Root volumes for EC2: EBS types GP2 and IO1 can be used as root
volumes for your EC2 server.
Use of EBS in database applications:
EBS can be used to store data for database applications in a number of ways.
Some examples include:
1. As a root volume for a database instance: An EBS volume can be used as the
root volume for an Amazon EC2 instance running a database application, such
as MySQL or PostgreSQL. This allows the database application to store its
data on a persistent and highly available storage volume, rather than relying on
the ephemeral storage of the EC2 instance.
2. As a storage volume for a managed database service: AWS offers several
managed database services, such as Amazon RDS and Amazon Aurora, that
allow users to easily set up and manage a database without having to worry
about the underlying infrastructure. These services allow users to create EBS
volumes as the storage for their database, providing persistent and scalable
storage for the database data.
3. As a storage volume for containerized databases: EBS can also be used as the
storage for containerized database applications, such as those deployed using
Amazon ECS or Amazon EKS. This allows users to store their database data
on a persistent and highly available storage volume, while still taking
advantage of the benefits of running their database in a containerized
environment.
Drawbacks:
 EBS is not recommended as temporary storage.
 They cannot be used as a multi-instance accessed storage as they cannot be
shared between instances.
 The durability offered by services like AWS S3 and AWS EFS is greater.
Amazon RDS – Introduction to Amazon Relational Database System
Amazon RDS is a relational database management service built on the AWS
cloud platform. It lets us create database instances to suit our
requirements, i.e. resizable, with a variety of database types, etc.
What is Amazon Relational Database Service (Amazon RDS)?
Amazon RDS is a service offered by Amazon Web Services that is managed
completely by AWS and offers a wide range of database engines, including the
following:
1. MySQL.
2. PostgreSQL.
3. Oracle.
4. SQL Server.
AWS takes care of data backups, the underlying infrastructure, scaling, and
load balancing. Security is strong: data is encrypted at rest, and access to
the data can be controlled with IAM (Identity and Access Management).
Disaster recovery (DR) is also handled automatically by AWS.
How Does Amazon RDS Work?
Traditionally, database management was a very scattered affair, spanning the
web server, the application server, and finally the database. Maintaining such
a vast system required a dedicated team; to shrink this workforce, AWS came up
with an all-in-one service, RDS. The architecture of RDS covers every aspect of
the traditional management system in one place, including everything from EC2
(Elastic Compute Cloud) to DNS (Domain Name System). Every part of the RDS
architecture has its own separate set of features, each distinct from the
others.
Use Cases Of Amazon RDS (AWS)
Below are some use cases of Amazon RDS, which is often used for secure, highly
configured applications such as gaming servers and health and financial
applications.
1. Web Applications: Amazon RDS is commonly used as the backend for web
applications, where it can support a high volume of input and output
operations and is easy to scale up and down.
2. Managed Database: Instead of managing the database yourself, AWS provides
Amazon RDS as a service; with a small amount of configuration, your
database is ready to perform operations.
3. Isolation: You can integrate and configure multiple applications with
secure isolation, protecting the data of each application's customers while
AWS manages the underlying infrastructure.
4. Highly Secure: You can use Amazon RDS for domains such as health care and
banking, because the data used in these applications requires strong
security, which can be achieved with the help of AWS RDS.
Features Of Amazon RDS
The following are some key features of Amazon RDS:
 Availability: The “Automated Backup” feature of RDS makes recovery
of a database instance much easier and makes it available for access quickly.
In addition, “Database Snapshots” are user-initiated backups in
Amazon RDS, which make it easier for the user to track the
alterations made to a database instance. These snapshots can
be shared among multiple AWS accounts in order to expand the availability of
the DB instance while maintaining the security of confidential data.
 Security: When creating a new database, you create a password that is
restricted and known only to you. By default, you are given
the “Admin role”, which has maximum authority over that particular
database. Amazon RDS also allows users to encrypt databases
using keys managed by KMS (Key Management Service).
 Backups: RDS provides the facility to take backups in multiple forms.
Snapshots are non-editable backups used for maintaining records. We can
also create automated backups simply by adjusting the configuration when
creating the database. (Reserved instances, by contrast, are a pricing
option rather than a backup feature.)
 Scalability: RDS enables us to scale up or down automatically
depending on the number of transactions happening on your database per
minute. It supports both “Horizontal Scaling” and “Vertical Scaling”. Let us
go through the difference between them.
o Horizontal Scaling addresses scenarios where the traffic on your
database increases exponentially. It creates multiple replicas of
the existing hardware and software in the cloud in order to handle
the traffic.
o Vertical Scaling addresses situations where traffic has not grown
much, but the current hardware and software configuration can no
longer handle the clients' demands. With this scaling method, we
add additional storage and processors to the pre-existing resources.

 Performance: RDS gives two SSD-backed storage options for its users,
i.e. General Purpose & Provisioned IOPS. These variants directly impact the
level of performance of the resource and its attached services. General
Purpose SSD is very cost-effective and is used where a broad range of
workloads must be served. Provisioned IOPS, as the name suggests, is designed
for I/O-intensive workloads that need fast, consistent throughput.
 Pricing: RDS only asks you to pay for what you use; once you are done with a
certain resource, delete it and you stop paying for it. There is no compulsory
minimum charge for using RDS. A bill is calculated from the database engine,
the type and size of the database, and your usage, and is sent to you at the
end of the month. Free-tier accounts are bound to specific configurations and
incur no charges as long as usage stays within the free-tier limits and
resources are deleted when no longer needed.
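Several of the benefits above (encryption via KMS, automated backups, storage type, vertical scaling) correspond to concrete settings when a database is created programmatically. Below is a minimal, hypothetical parameter set in the shape taken by boto3's `rds.create_db_instance` call; all names and values are placeholders, and it is shown as a plain dictionary so it can be inspected without AWS credentials.

```python
# Hypothetical settings for creating an RDS instance with boto3 (assumed
# available). Each key maps to one of the benefits discussed above.
# In practice: boto3.client("rds").create_db_instance(**params)
params = {
    "DBInstanceIdentifier": "demo-db",     # placeholder instance name
    "Engine": "mysql",                     # any supported database engine
    "DBInstanceClass": "db.t3.micro",      # vertical scaling: larger class = more CPU/RAM
    "AllocatedStorage": 20,                # storage size in GiB
    "StorageType": "gp2",                  # General Purpose SSD ("io1" = Provisioned IOPS)
    "StorageEncrypted": True,              # Security: encrypt at rest with a KMS-managed key
    "BackupRetentionPeriod": 7,            # Backups: keep automated backups for 7 days
    "MasterUsername": "admin",             # default admin role on the database
    "MasterUserPassword": "change-me",     # placeholder; never hard-code real credentials
}
```

Pricing follows from these choices: the instance class, storage type, and allocated storage all feed into the monthly bill.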
Amazon RDS Alternatives
1. MySQL – It is the most widely used open-source RDBMS in the world, and it is
now developed by Oracle. Unlike Amazon RDS, it is not inherently cloud-based
and can run on a PC as well. It is also offered as one of the Database Engine
options on RDS. It supports five server operating systems. The main
applications of MySQL are in the e-commerce domain, data warehousing, and
logging applications.
2. PostgreSQL – It is one of the oldest and most popular open-source RDBMSs,
developed by the PostgreSQL Global Development Group, with origins in the
POSTGRES project of the late 1980s. It is cross-platform software and supports
more operating systems than most comparable systems. Its primary focus is
maintaining the security of the data, and it offers a vast collection of
user-defined functions.
3. MariaDB – It is a highly MySQL-compatible RDBMS that also supports secondary
database models, i.e. spatial and graph. It was released in 2009
by MariaDB Corporation Ab (MariaDB Enterprise). It supports a wide range
of programming languages and also allows users to write server-side
scripts. One of the best features of MariaDB is its focus on high-level
security: the MariaDB community continuously finds and fixes security issues.
All these alternatives can meet users’ requirements to a certain level. AWS
introduced RDS to ensure that ultimate control resides in the hands of the
users: rather than being purely query-driven, RDS is operated largely through
a console-like interface.
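On RDS, choosing between these alternatives mostly comes down to the `Engine` value passed at creation time, plus an engine-appropriate port. The identifiers below are the real RDS engine names; the helper itself is only an illustrative sketch.

```python
# Conventional default ports for the RDS engine identifiers discussed above.
DEFAULT_PORTS = {
    "mysql": 3306,      # MySQL
    "mariadb": 3306,    # MariaDB (MySQL-compatible, same default port)
    "postgres": 5432,   # PostgreSQL
}

def engine_port(engine: str) -> int:
    """Return the conventional port for a supported engine identifier."""
    if engine not in DEFAULT_PORTS:
        raise ValueError(f"unsupported engine: {engine}")
    return DEFAULT_PORTS[engine]
```

Swapping `"mysql"` for `"mariadb"` or `"postgres"` in a create-database request is, in most cases, the full extent of the RDS-side change.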
Amazon DynamoDB
DynamoDB allows users to create databases capable of storing and retrieving any
amount of data and comes in handy while serving any amount of traffic. It
dynamically manages each customer’s request and provides high performance by
automatically distributing data and traffic over servers. It is a fully managed
NoSQL database service that is fast, predictable in terms of performance, and
seamlessly scalable. It relieves the user from the administrative burdens of
operating and scaling a distributed database as the user doesn’t have to worry
about hardware provisioning, patching software, or cluster scaling. For the
steps to create a table, refer to AWS DynamoDB – Creating a Table.
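As a sketch of how little administration DynamoDB asks for, here is a hypothetical table definition in the shape expected by boto3's `create_table` call. The table and attribute names are placeholders, and it is shown as a dictionary so it can be read without an AWS account.

```python
# Hypothetical DynamoDB table definition. In practice:
#   boto3.client("dynamodb").create_table(**table_spec)
table_spec = {
    "TableName": "Orders",                                      # placeholder table name
    "KeySchema": [
        {"AttributeName": "CustomerId", "KeyType": "HASH"},     # partition key
        {"AttributeName": "OrderId", "KeyType": "RANGE"},       # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "CustomerId", "AttributeType": "S"},  # S = string
        {"AttributeName": "OrderId", "AttributeType": "S"},
    ],
    # On-demand mode: DynamoDB distributes data and scales throughput
    # automatically, so no capacity units need to be provisioned.
    "BillingMode": "PAY_PER_REQUEST",
}
```

Note that only the key attributes are declared up front; DynamoDB is schemaless for all other attributes.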
Amazon Redshift
It is a data warehouse based on the cloud. Amazon Redshift has a commercial
license and is part of Amazon Web Services. It handles data at a large scale
and is known for its scalability, processing data in parallel across multiple
nodes. It follows the ACID properties as its working principle and is very
popular. It is derived from PostgreSQL and implemented in C, and it offers
high availability. To know more about Amazon Redshift, refer to Amazon Redshift.
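Redshift's parallel processing comes from running a query across several nodes at once. The hypothetical parameter set below, in the shape taken by boto3's `create_cluster` call, shows where that node count is chosen; the identifier, database name, and credentials are placeholders.

```python
# Hypothetical Redshift cluster parameters. In practice:
#   boto3.client("redshift").create_cluster(**cluster_spec)
cluster_spec = {
    "ClusterIdentifier": "demo-warehouse",  # placeholder cluster name
    "NodeType": "dc2.large",                # a real node type; the choice is illustrative
    "ClusterType": "multi-node",            # required when NumberOfNodes > 1
    "NumberOfNodes": 2,                     # queries run in parallel across these nodes
    "DBName": "analytics",                  # placeholder database name
    "MasterUsername": "admin",
    "MasterUserPassword": "Change-me-1",    # placeholder; never hard-code real credentials
}
```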
Steps To Configure Amazon RDS
Now, let us look at the AWS Relational Database Service management console.
Step 1: To reach the RDS management console, first log in to your AWS account
(to create an AWS free-tier account, refer to Amazon Web Services (AWS) –
Free Tier Account Set up). Once you are directed to the primary screen, click
on “Services” at its leftmost part. From the long list, look for the
sub-heading “Databases”, and under it you will find “RDS”. Click on it.
Step 2: Once you click on RDS, you will shortly see the RDS management console.
This is what the RDS dashboard looks like. On the left is the navigation pane
that directs you to all the services under RDS. You can create your database
from here by clicking the orange “Create database” button. To create a
database in RDS, follow the linked article.
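Everything the console steps above expose can also be done programmatically. Below is a minimal sketch, assuming boto3 is installed and credentials are configured; the API call is wrapped in a function so its structure can be followed (and exercised with a stub client) without an AWS account.

```python
def list_rds_instances(rds_client):
    """Return the identifiers of all RDS instances visible to the client.

    `rds_client` is expected to behave like boto3.client("rds"), whose
    describe_db_instances() returns {"DBInstances": [{...}, ...]}.
    """
    response = rds_client.describe_db_instances()
    return [db["DBInstanceIdentifier"] for db in response["DBInstances"]]

# Usage (requires AWS credentials):
#   import boto3
#   print(list_rds_instances(boto3.client("rds")))
```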
FAQs On Amazon RDS
1. Is Amazon RDS a Data Warehouse?
No. Amazon Relational Database Service (RDS) manages database servers in the
cloud and is aimed at transactional workloads. To access and analyse massive
amounts of data, Amazon Redshift provides data warehouse and data lake
technologies.
2. What Type Of Database Is RDS?
Amazon Relational Database Service (RDS) is a managed relational (SQL)
database service offered by Amazon Web Services (AWS).