Cloud Computing

AWS Cloud Basics — 1

Cloud can be defined as a group of shared physical resources made available for consumption of compute and storage capacity, primarily on an on-demand basis over the internet. The shared resources can be located in multiple locations across the globe to enable accelerated access to cloud resources like servers, databases, etc. Several service providers, such as Amazon (AWS), Microsoft, Google, and IBM, offer cloud services today, with Amazon Web Services (AWS) currently holding the largest share of the cloud market. In this document we will discuss the basics of AWS cloud computing. The key points below can also help candidates preparing for the AWS Cloud Practitioner certification.

Advantages of AWS cloud:

1. Trade capital expense for variable expense

2. Economies of scale

3. Stop guessing about capacity

4. Increase speed and agility

5. Stop spending money on running and maintaining data centers

6. Go global in minutes

Types of cloud computing:

1. Infrastructure as a Service (IaaS): the user manages the servers (physical or virtual) as well as the operating system; the data center provider has no access inside your server

2. Platform as a Service (PaaS): someone else manages the underlying hardware and operating system; the user manages only their applications and data

3. Software as a Service (SaaS): someone else manages the underlying hardware and the compute/storage capacity. The user accesses only the software

4. Function as a Service (FaaS): the user writes a function, and the provider runs it on demand while managing all underlying infrastructure

Types of cloud computing deployments:

1. Public cloud: common resources available to the general public, e.g. AWS, Azure, GCP

2. Private cloud: a private network of a user or organization, with private access to cloud resources

3. Hybrid cloud: a mix of a private network and a public cloud network

Cloud Infrastructure:

Regions: Regions can be defined as geographically distinct locations, each comprising multiple data centers called Availability Zones. Some AWS services are region specific: although resources can be accessed globally by users, certain services may not be available in a particular region due to government policies. At the time of writing there are 16 regions globally.

Availability Zones: Availability Zones (AZs) are the data centers within a region, and they hold the actual physical resources behind cloud services. The AZs in a region are interconnected via high-speed, low-latency links, so if one AZ is shut down for any reason, at least one other AZ is available to take over the load. This fault isolation is the main reason every region has at least 2 AZs.

Edge locations: Edge locations can be seen as servers kept geographically closer to users. These servers can also act as cache storage, enabling faster delivery of frequently accessed data. The transfer acceleration technique uses these edge locations to speed up transfers of new data.
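
To make regions and AZs concrete, here is a minimal sketch using the boto3 SDK for Python, assuming boto3 is installed and AWS credentials are configured; the region name us-east-1 is only an example:

```python
import boto3

# An EC2 client scoped to one region (us-east-1 is a placeholder).
ec2 = boto3.client("ec2", region_name="us-east-1")

# List every region visible to this account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# List the Availability Zones inside the client's region.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```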

Compute services in AWS cloud:

Elastic Compute Cloud (EC2): Elastic Compute Cloud, or EC2, is the server capacity provided by AWS for cloud computing. The EC2 service cannot be termed serverless. Below are the main points on AWS EC2:

1. EC2 is server based

2. It has resizable compute capacity

3. It reduces the time required to boot new servers to minutes

4. EC2 is highly scalable

5. Users pay for the capacity used, as per the pricing plans available in AWS below:

· On-demand pricing: On-demand pricing is beneficial for first-time users and for unpredictable workloads. Linux-based instances are currently charged per second, and Windows-based instances per hour

· Reserved instances: Reserved instances are beneficial for users with applications that have steady, predictable usage. They offer a discount on per-hour charges in exchange for a 1- to 3-year contract with AWS

· Spot instances: Spot instances can be used by applications with flexible start and end times. For example, if an organization needs financial audits run outside working hours, it can use spot instances and bid for the lowest price for the resources used.

6. Users can also opt for dedicated hosts: physical servers dedicated to their sole use, meaning the underlying hardware is not shared with other customers

7. EC2 offers multiple instance-type families: F, I, G, H, T, D, R, M, C, P, X

8. When an EC2 instance is set up, an 8 GB Elastic Block Storage (EBS) root volume is attached by default

9. Security groups are virtual firewalls for EC2 instances.

10. EC2 should always be designed for failure; this concept is a strong fundamental of AWS cloud architecture

11. EC2 can be accessed using the Console, CLI, and SDK (a sample SDK call is sketched after this list)

12. A private key is required to connect to an EC2 instance

13. Identity and Access Management (IAM) roles can be used to avoid saving credentials on EC2 instances. IAM will be discussed later in detail
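
As referenced in point 11, below is a minimal boto3 sketch of launching an EC2 instance from the SDK; the AMI ID, key pair name, and security group ID are placeholders to replace with your own values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
    InstanceType="t2.micro",                    # one of the many instance types (point 7)
    KeyName="my-key",                           # key pair whose private key you connect with (point 12)
    SecurityGroupIds=["sg-0123456789abcdef0"],  # security group = virtual firewall (point 9)
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```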

Lambda: Lambda was introduced by AWS as part of its serverless architecture in the cloud. Unlike EC2, this AWS service does not expose a dedicated server machine. AWS plans to move cloud computing toward serverless architecture in the future. Lambda runs your code automatically and is currently used predominantly for stateless applications. A user is charged for every 100 ms their code executes and for the number of times the code is triggered.
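
A minimal sketch of a Lambda handler in Python is shown below; lambda_handler is the conventional entry-point name, and the event fields are hypothetical:

```python
# handler.py -- a stateless function. AWS runs it only when triggered and
# bills per trigger plus execution time, metered in 100 ms increments.
def lambda_handler(event, context):
    # "name" is a hypothetical field in the triggering event.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```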

Identity Access Management (IAM)

Identity Access Management, or IAM, is used to create multiple unique users and groups and to define policies and roles. The root account, created at sign-up, has full access; IAM lets you create additional identities with only the permissions they need. Below are some key points with respect to IAM:

1. The IAM service is global, i.e. it is not restricted to any region
2. Users should enable multi-factor authentication (MFA), using a virtual or hardware device
3. Create individual IAM users for:

a. Programmatic access: needs an access key ID and a secret access key

b. AWS Management Console access: needs a username and password

4. IAM policies are defined in JSON (a sample policy is sketched after this list)

5. IAM can be accessed in 3 ways:

a. Console

b. CLI

c. SDK

6. Groups can hold multiple users

7. Set permissions for a group by attaching policies to it

8. IAM roles are used by another account or service to access AWS resources. For example, a read-only role can be attached to an EC2 instance to let it access AWS storage services
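
As referenced in point 4, here is a minimal sketch of a JSON policy created through boto3; the policy name and bucket are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")  # IAM is global, so no region is needed (point 1)

# A minimal JSON policy document granting read-only access to one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",   # placeholder bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

resp = iam.create_policy(
    PolicyName="S3ReadOnlyExample",          # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
print(resp["Policy"]["Arn"])
```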

AWS Storage

Simple Storage Service (S3)

Simple Storage Service, or S3, in AWS is predominantly used for file storage and static website hosting. Below are some key points on S3:

1. Easy to use
2. Object-based storage: for files, not for an OS or applications
3. Objects can range from 0 bytes to 5 TB in size
4. Files are stored in buckets
5. Bucket names share a universal namespace, so each bucket name must be globally unique
6. An HTTP 200 code is received when an upload to an S3 bucket succeeds (see the upload sketch after this list)
7. Data consistency model: read-after-write consistency for PUTs of new objects; eventual consistency for overwrite PUTs and DELETEs
8. S3 is a simple key-value store

9. S3 is a highly available AWS resource

10. Features of S3 include tiered storage, lifecycle management, versioning, and encryption

11. S3 storage classes include Standard, Standard-Infrequent Access, and Glacier

12. S3 pricing is based on Storage, Requests, Data transfer out and Transfer acceleration

13. By default, S3 buckets have no public access

14. The name of an S3 bucket must be DNS compliant

15. S3 can be used for static website hosting
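
Below is the upload sketch referenced in point 6: a boto3 put_object call whose response carries the HTTP status code; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.put_object(
    Bucket="my-unique-bucket-name",  # placeholder; must be globally unique and DNS compliant
    Key="docs/notes.txt",            # the object's key in the key-value store
    Body=b"hello from S3",
)
# A successful upload returns HTTP 200 (point 6).
print(resp["ResponseMetadata"]["HTTPStatusCode"])
```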

GLACIER: AWS provides another data storage service, Glacier, focused on data archival. Data can be transferred from an S3 bucket to Glacier periodically, or as part of S3 lifecycle management (a sample lifecycle rule is sketched after the list below). Below are the key points of Glacier:

  1. Archival service
  2. Retrieval time is 3–5 hours
  3. Deployed in multiple Availability zones
  4. User is charged for data retrieval
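
Here is the lifecycle rule sketch referenced above: a boto3 call that transitions objects under a prefix to Glacier after 30 days; the bucket name, prefix, and day count are placeholder values:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under "archive/" to the Glacier storage class after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-unique-bucket-name",            # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Filter": {"Prefix": "archive/"},  # placeholder prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```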

Elastic Block Storage (EBS)

Elastic Block Storage, or EBS, is used as a virtual hard disk for EC2 instances. Below are some key points on EBS (a sample volume-creation call is sketched after this list):

  1. Suitable for an OS boot volume
  2. An EBS volume lives in a single Availability Zone and can only be attached to EC2 instances in that AZ
  3. Data is automatically replicated within its AZ to protect against component failure
  4. Types of root device volumes: EBS-backed and instance store-backed
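
Below is the volume-creation sketch referenced above: boto3 calls that create an 8 GB volume in one AZ and attach it to an instance; the AZ, instance ID, and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Create an 8 GiB volume; it must live in the same AZ as the instance (point 2).
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8, VolumeType="gp2")

# Wait until the new volume is ready to attach.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it to an existing instance as a secondary disk.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```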
