Storage AWS CCP

A detailed look at Storage in Amazon Web Services, cloud computing

AMAZON S3

• What is availability?
• What is durability?
• AWS S3 is one of the most heavily used AWS services, with numerous use cases
• S3 is a managed AWS service that is extremely scalable, way beyond what can
be attained on premises
• Nevertheless, limits exist on the size of a single object (currently 5 TB), but
this is not an inhibitor for the majority of its use cases
• S3 operates as an object storage service, which means that every object
uploaded does not conform to a data structure or a hierarchy like a file system
would; instead it exists across a flat address space and is referenced by a unique URL
• S3 is a regional service, so as a customer, when uploading data you are required
to specify the region in which that data is to be placed
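As an illustrative sketch of regional bucket placement and flat-key addressing (assuming the boto3 Python SDK; the bucket name and region are hypothetical, not from the original slides):

    import boto3

    # Create a bucket pinned to a specific region (eu-west-1 here);
    # for us-east-1 the CreateBucketConfiguration argument must be omitted
    s3 = boto3.client('s3', region_name='eu-west-1')
    s3.create_bucket(
        Bucket='my-ccp-demo-bucket-12345',  # hypothetical; must be globally unique
        CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'},
    )

    # Upload an object; it lives in a flat address space under a key,
    # not in a real directory tree
    s3.put_object(Bucket='my-ccp-demo-bucket-12345', Key='hello.txt', Body=b'hello')
    # The object is then referenced by a unique URL such as:
    # https://my-ccp-demo-bucket-12345.s3.eu-west-1.amazonaws.com/hello.txt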
AMAZON S3
• By specifying your region, AWS S3 will duplicate the uploaded data multiple
times across multiple AZs within that region to increase both its durability and
availability
• S3 durability is eleven 9s: 99.999999999%
• The availability of data in S3 is dependent on the storage class used
• When looking at availability, AWS ensures that the uptime of S3 is between
99.5% and 99.99%, depending on the storage class
• The durability of your data refers to the probability of storing it without
the risk of degradation, corruption or other damaging effects
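To make eleven 9s of durability concrete, a quick back-of-the-envelope calculation (an illustrative sketch, not from the original slides):

    # Expected annual object loss at eleven 9s of durability
    objects = 10_000_000                   # objects stored
    annual_loss_rate = 1 - 0.99999999999   # eleven 9s
    print(objects * annual_loss_rate)      # ~0.0001: on average one object
                                           # lost every ~10,000 years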
AMAZON S3 DEMO
• To store data in S3, first define a bucket
• When uploading data into S3, a specific structure is used to locate your data in
the flat address space (think of the bucket as a container for your data)
• The bucket name you specify must be unique not only within the region you specify
but globally against all other buckets that exist, and there are many millions
because of the flat address space, so you cannot duplicate bucket names
• Once you have created your bucket, you can upload data into it
• By default your account can have up to 100 buckets, which is a soft limit that
you may request AWS to raise
• Any object uploaded into your bucket is given a unique key to identify it
• In addition, you can create folders within your bucket to aid
with categorization for easier data management
AMAZON S3 DEMO
• Folders only help with data management; many features of AWS S3
work at the bucket level and not at the level of a specific folder
• So the unique object key for every object contains the bucket name, any folder
names, and the object's own name
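A short sketch of how "folders" are really just key prefixes in the flat address space (assuming boto3; bucket and key names are hypothetical):

    import boto3

    s3 = boto3.client('s3')
    bucket = 'my-ccp-demo-bucket-12345'  # hypothetical

    # A "folder" is just a prefix embedded in the object key
    s3.put_object(Bucket=bucket, Key='invoices/2024/january.pdf', Body=b'...')

    # Listing with a prefix and delimiter makes the flat keys look hierarchical
    resp = s3.list_objects_v2(Bucket=bucket, Prefix='invoices/', Delimiter='/')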
1. Log in to the management console
2. Find AWS S3 under the Storage category and select it
3. Begin bucket creation by clicking Create bucket
4. Select the region you want for the bucket
5. Configure the options you want for the bucket
6. Confirm that your bucket is created and review its properties
7. To upload an object, click the Upload button while inside your bucket
8. Add the file to be uploaded
9. Set the object permissions
10. Set the object properties
11. Review, then upload
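The same console steps can be scripted; a minimal sketch with boto3 (the file, bucket name and settings are hypothetical):

    import boto3

    s3 = boto3.client('s3')

    # Upload with permissions and properties set in one call, mirroring the
    # console's permissions/properties/review steps
    s3.upload_file(
        'report.pdf',                          # local file (hypothetical)
        'my-ccp-demo-bucket-12345',            # bucket (hypothetical)
        'reports/report.pdf',                  # object key
        ExtraArgs={
            'ACL': 'private',                  # object permissions
            'ServerSideEncryption': 'AES256',  # encrypt at rest
            'StorageClass': 'STANDARD',        # object property
        },
    )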
Storage classes
• AWS S3 allows you to select storage classes based on performance features and costs

1. S3 STANDARD: considered a general purpose storage class for a wide range of
use cases where you need high throughput and low latency, with the added ability
of being able to access your data frequently

• By copying data to multiple AZs, S3 Standard has 99.999999999% (eleven 9s)
durability, which means that your data is protected against the failure of any
one single AZ

• S3 Standard also offers 99.99% availability across the year, the highest offered by S3

• From a security standpoint this storage class has the added support of Secure
Sockets Layer (SSL) for encrypting data in transit, as well as encryption of data at rest
Storage classes
• LIFE CYCLE RULES: in S3, objects can be automatically moved to another storage
class

• Life cycle rules provide an automatic method of managing the life of your data
while it is being stored in S3

• By adding a life cycle rule to your bucket you are able to configure specific
criteria that automatically move your data from one class to another or
delete it from Amazon S3 altogether

• You may want to do this as a cost-saving measure, moving the data to a
cheaper storage class once it is accessed less often (a sketch follows below)
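A minimal sketch of such a rule with boto3 (the rule ID, prefix and day counts are hypothetical):

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_lifecycle_configuration(
        Bucket='my-ccp-demo-bucket-12345',  # hypothetical
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'archive-then-expire',    # hypothetical rule name
                'Status': 'Enabled',
                'Filter': {'Prefix': 'logs/'},  # apply to this prefix only
                'Transitions': [
                    {'Days': 30, 'StorageClass': 'STANDARD_IA'},  # after 30 days
                    {'Days': 90, 'StorageClass': 'GLACIER'},      # after 90 days
                ],
                'Expiration': {'Days': 365},    # delete after a year
            }],
        },
    )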
Storage classes
2. S3 INTELLIGENT TIERING (S3 INT): this class is ideal for those circumstances
where the frequency of access to the data is unknown, and it helps optimize
your storage costs
• It works effectively where you have unpredictable data access patterns;
depending on those patterns, objects in the S3 intelligent tiering class are
automatically moved between tiers
• Within the S3 intelligent tiering class there exist two tiers
between which data is moved based on each object's access pattern, and these
tiers are:
I. Frequent Access
II. Infrequent Access
Storage classes
• These tiers are part of Intelligent Tiering and are separate from the S3 storage
classes themselves
• When objects are placed into Intelligent Tiering they are first placed into the
Frequent Access Tier, which is the more expensive of the two tiers
• If an object is not accessed for the next 30 days, AWS will automatically
move the object into the Infrequent Access Tier
• If that same object is accessed again, it will automatically be moved back to the
Frequent Access Tier
• Note that the availability of S3 Intelligent Tiering is not as high as that of S3
Standard; it is set at 99.9%
• This storage class also has the support of SSL for encrypting data in transit, in
addition to encrypting data when it is at rest
• S3 INT also supports life cycle rules and matches the same performance,
throughput and low latency as S3 Standard
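To place an object directly into this class, the storage class can be named at upload time; a sketch with hypothetical names:

    import boto3

    s3 = boto3.client('s3')
    # Upload straight into Intelligent Tiering; AWS then moves the object
    # between the Frequent and Infrequent Access tiers automatically
    s3.put_object(
        Bucket='my-ccp-demo-bucket-12345',  # hypothetical
        Key='analytics/events.csv',         # hypothetical
        Body=b'...',
        StorageClass='INTELLIGENT_TIERING',
    )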
Storage classes
3. S3 STANDARD INFREQUENT ACCESS (S3 STANDARD-IA): it can be considered the
equivalent of the Infrequent Access tier in Intelligent Tiering, since it is designed
for data that is accessed infrequently, but it still offers high throughput and low
latency like S3 Standard

• Like the other S3 storage classes it carries 99.999999999% (eleven 9s) durability
across multiple AZs within a single region to protect against AZ outages

• It shares the same 99.9% availability across the year as Intelligent Tiering

• As a result it comes at a cheaper cost than S3 Standard

• Common security features such as encryption are supported, as well as management
controls such as life cycle rules to automatically move data to an alternate storage
class based on your requirements
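As with the other classes, Standard-IA can be selected per object at upload time; a brief sketch (names hypothetical):

    import boto3

    s3 = boto3.client('s3')
    # Store an object in Standard Infrequent Access from the start
    s3.put_object(
        Bucket='my-ccp-demo-bucket-12345',  # hypothetical
        Key='backups/db-snapshot.gz',       # hypothetical
        Body=b'...',
        StorageClass='STANDARD_IA',
    )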
Storage classes
4. S3 ONE ZONE INFREQUENT ACCESS: being an Infrequent Access storage class it is
designed for objects that are unlikely to be accessed frequently
• It also offers the same throughput and low latency
• However, although the durability remains at 99.999999999% (eleven 9s), it only
exists across a single AZ
• As the name implies, it is one AZ
• So the objects will be copied multiple times within the same AZ instead of across
multiple AZs
• This results in roughly a 20% cost reduction compared with S3 Standard-IA
• S3 One Zone-IA does offer the lowest level of availability, which currently stands
at 99.5%, due to the fact that your data is being stored in a single AZ
• Should the AZ storing the data become unavailable, you will lose access to your
data, or even worse, it may be completely lost should the AZ be destroyed in a
catastrophic event
• Again, life cycle rules and encryption are in place to protect your data both in
transit and at rest (a sketch of moving an object into this class follows below)
STORAGE CLASSES
5. S3 GLACIER: it is used for archival of data; it can be accessed separately from
Amazon S3 but interacts closely with it
• AWS S3 Glacier integrates directly with Amazon S3 life cycle rules, but the
fundamental difference is that this storage class comes at a fraction of the cost
of the S3 storage classes when it comes to storing the same amount of data
• The difference is that it does not provide you with the same features as Amazon S3
and, more importantly, it does not provide you with instant access to your data
• AWS Glacier offers an extremely low cost, long term, durable storage solution
usually referred to as cold storage
• It is capable of storing any data/object like Amazon S3; however, it does not
provide instant access to your data
• In addition to this there are other fundamental differences that make the service
fit for other use cases
• The service has eleven 9s of durability, making it just as durable as Amazon S3,
which is achieved by replicating data across multiple AZs in the same region
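A minimal sketch of creating a vault and storing an archive with boto3 (the vault name and payload are hypothetical):

    import boto3

    glacier = boto3.client('glacier', region_name='eu-west-1')

    # Vaults are regional containers for archives
    glacier.create_vault(vaultName='my-archive-vault')  # hypothetical name

    # Store data as an archive inside the vault; the returned archiveId is
    # the only handle for retrieving it later
    resp = glacier.upload_archive(
        vaultName='my-archive-vault',
        archiveDescription='database backup 2024-01',  # hypothetical
        body=b'...archive bytes...',
    )
    archive_id = resp['archiveId']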
STORAGE CLASSES
5. S3 GLACIER (continued): it provides storage for the same amount of data at a
considerably lower cost than Amazon S3, and that is because the retrieval of data
from Glacier is not instant
• Retrieving data from Glacier can take several hours depending on the retrieval
option chosen
• The data structure of Glacier is centered on vaults and archives, not buckets and
folders as in S3, with Glacier vaults acting as the container for Glacier archives
• Vaults are regional; at their creation, you will be asked to supply the region in
which you want your data/objects to reside
• Inside these vaults you then place your data as archives, which are
similar to any other S3 objects; you can have unlimited archives in your vaults
• From a capacity perspective Glacier follows rules similar to S3: effectively you
have access to an unlimited quantity of storage for your archives and vaults
• Whereas S3 provides a Graphical User Interface (GUI) to view, manage and retrieve
your data from buckets and folders, Glacier does not offer this service (retrieval
is instead a two-step job, as sketched below)
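Because access is not instant, retrieval works as a job that is initiated and later collected; an illustrative sketch (all names hypothetical):

    import time
    import boto3

    glacier = boto3.client('glacier', region_name='eu-west-1')
    archive_id = '...'  # the archiveId returned by upload_archive

    # Step 1: ask Glacier to stage the archive for retrieval
    job = glacier.initiate_job(
        vaultName='my-archive-vault',  # hypothetical
        jobParameters={
            'Type': 'archive-retrieval',
            'ArchiveId': archive_id,
            'Tier': 'Standard',        # standard retrievals take hours
        },
    )

    # Step 2: poll until the job completes, then download the archive
    while not glacier.describe_job(vaultName='my-archive-vault',
                                   jobId=job['jobId'])['Completed']:
        time.sleep(600)                # check every 10 minutes

    data = glacier.get_job_output(vaultName='my-archive-vault',
                                  jobId=job['jobId'])['body'].read()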
