The backbone of AWS - EC2 [Hitting the nail on the head]

To connect the dots for the whole picture, you can first learn networking and other prerequisite concepts. I am linking my previous post here.

EC2 = Cloud Computing.
Computing = Memory + Storage + Processor + OS + Network

Understanding AWS EC2 pricing:

After you create an instance, you may not know beforehand how long you will use it, or you may have no prediction about future usage. In this case, you should opt for On-Demand pricing. This pricing is expensive because you make no commitment on your side about how long you will use the instance. If the business doesn't go as expected, you can simply stop or terminate the instances.


Say you predicted the period and usage you need in the future and arrived at one year. You should not choose On-Demand pricing here because it is expensive and not an efficient use of cloud computing. If you commit to AWS, AWS offers another pricing mechanism called Reserved pricing. The commitment period is either one year or three years. The longer the commitment, the lower the cost. The discounted rate also depends on how much you pay upfront.
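A rough back-of-the-envelope comparison makes the trade-off concrete. All rates below are hypothetical, chosen only to illustrate the idea that a commitment buys a lower hourly rate:

```python
# Rough cost comparison between On-Demand and Reserved pricing.
# Both hourly rates are made-up numbers, for illustration only.
HOURS_PER_YEAR = 24 * 365  # 8760

on_demand_rate = 100.0    # Rs/hr, no commitment (hypothetical)
reserved_rate_1yr = 60.0  # Rs/hr with a 1-year commitment (hypothetical)

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_rate_1yr * HOURS_PER_YEAR

savings_pct = 100 * (on_demand_cost - reserved_cost) / on_demand_cost
print(f"On-Demand for a year : Rs.{on_demand_cost:,.0f}")
print(f"Reserved for a year  : Rs.{reserved_cost:,.0f}")
print(f"Savings              : {savings_pct:.0f}%")
```

If you are confident the instance will run all year, the committed rate wins; the real discount varies by instance type and upfront payment, so always check the current AWS price list.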


Assume AWS can run 100 instances, but only 80 instances are running. The 20 idle instances represent lost revenue and unutilized capacity, so AWS offers them at a very cheap rate. But there is a caveat: AWS makes no commitment on its side and can stop your instance at any time with a 2-minute notification. This is an oversimplified example just for the sake of understanding. This pricing model is called Spot Instance pricing.

Spot Instances are based on a bidding model.

If

On-demand price = Rs.100/hr

Market spot price = Rs. 20/hr

Your bid price = Rs.50/hr


Since your bid is above the market spot price, you pay just the market price of Rs.20/hr. Whoa! An 80% discount compared to On-Demand, but they can stop you at any time.

Note: Amazon determines the spot price of every instance type in every availability zone, gradually adjusting it according to long-term supply and demand.
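The bidding rule above can be sketched in a few lines. This is a toy model of the classic spot mechanism (your instance runs while your bid covers the market price, and you pay the market price, not your bid); the prices are the hypothetical ones from the example:

```python
# Toy model of classic spot bidding: the instance runs while
# bid >= market price, and you pay the market price, not your bid.
def spot_status(bid: float, market_price: float) -> tuple[bool, float]:
    """Return (running, hourly_price_paid)."""
    if bid >= market_price:
        return True, market_price  # you pay the going market rate
    return False, 0.0              # outbid: AWS reclaims the instance

running, price = spot_status(bid=50.0, market_price=20.0)
print(running, price)   # True 20.0 -> 80% cheaper than Rs.100/hr On-Demand

# If demand spikes and the market price rises above your bid, you lose it:
running, price = spot_status(bid=50.0, market_price=75.0)
print(running, price)   # False 0.0
```

This is why spot instances suit interruptible workloads (batch jobs, rendering) rather than anything that must stay up.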


Understanding Instance types:

The instance types you choose depend on your application's nature.

There are,

General purpose = A balanced mix of compute + memory + storage + network resources, for workloads like web servers, development, testing, etc.

Compute-optimized = Processor-heavy jobs like video encoding, scientific modeling, etc.

Memory-optimized = RAM-optimized = Real-time data analytics, in-memory databases.

Storage-optimized = Fast read/write access to large datasets on local storage, such as logs and databases.

Accelerated computing = Includes additional hardware like GPUs to do massive amounts of parallel processing, such as graphics processing and VFX.
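As a quick reference, the categories map to EC2 instance family prefixes. The families listed here (t3, m5, c5, etc.) are common examples at the time of writing, not an exhaustive or current list; always check the AWS instance-type pages:

```python
# Illustrative mapping from workload category to example EC2 families.
# Family names are examples only; newer generations supersede them.
INSTANCE_FAMILIES = {
    "general purpose":       ["t3", "m5"],   # web servers, dev/test
    "compute optimized":     ["c5"],         # video encoding, modeling
    "memory optimized":      ["r5", "x1"],   # real-time analytics, in-memory DBs
    "storage optimized":     ["i3", "d2"],   # logs, high-I/O databases
    "accelerated computing": ["p3", "g4"],   # GPU workloads: ML, graphics, VFX
}

print(INSTANCE_FAMILIES["compute optimized"])  # ['c5']
```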


Details by Instance Type:

If you are just starting with AWS, you will likely use the General purpose type. Their typical uses are web servers and enterprise applications like CRM and ERP. They are a good mix of cost and functionality.

They come in two varieties - Fixed and Burstable performance.

Burstable = The ability to temporarily scale up your computing power. It comes at a cost.

When your CPU utilization peaks only at particular times, opt for a Burstable performance instance. You earn CPU credits while the CPU runs below its baseline, and these credits are spent during burst load.
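The earn-while-idle, spend-while-bursting mechanic can be sketched as a toy simulation. The baseline and credit rates below are made up for illustration; real T-family instances have per-type baselines and credit rates documented by AWS:

```python
# Toy model of burstable CPU credits: below a baseline you earn credits,
# above it you spend them. All numbers are hypothetical.
BASELINE = 20        # baseline CPU utilization (%)
EARN_PER_HOUR = 6    # credits earned per hour at/below baseline

def simulate(hours_utilization, credits=0.0):
    """Track the credit balance over hourly CPU utilization values."""
    for util in hours_utilization:
        if util <= BASELINE:
            credits += EARN_PER_HOUR           # accrue while quiet
        else:
            credits -= (util - BASELINE) / 10  # spend during a burst
        credits = max(credits, 0.0)            # balance never goes negative
    return credits

# A quiet night (earn), then a morning traffic burst (spend):
balance = simulate([5, 5, 5, 90, 90])
print(balance)   # 4.0 -> credits earned overnight paid for the burst
```

When the balance hits zero, a real burstable instance is throttled back to its baseline, which is why these instances fit spiky, not sustained, workloads.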

Suppose you need raw computing power for workloads such as scientific modeling, high-performance web servers, or media transcoding. You are likely to use Compute-optimized instances.

Refer here for different subcategories,

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compute-optimized-instances.html.


AWS has comprehensive documentation on instance types, which I link below,

Memory-optimized

Storage-optimized

Accelerated computing.

Note: AWS instance types are still evolving, and the right choice depends on the use case; hence it is essential to check the AWS website while choosing an instance type.


Security Groups

Security Groups act as a firewall to the EC2 instances.

Properties of Security Group

  • All incoming traffic is denied by default.

  • All outgoing traffic from the EC2 instance is allowed by default.

  • We can only add allow rules, i.e., we can't say "deny this incoming request."

  • If a request is sent from an EC2 instance, its response is always allowed back in, irrespective of inbound rules (security groups are stateful).

  • An SG's name must be unique within the VPC.

  • An SG can be attached to many instances; in other words, many instances can share a common SG.

  • EC2 instances and SGs are independent. EC2 instances don't know what is happening in the Security Group.

  • How do we connect to EC2 instances from our remote/home laptop? We need some connection between our computer and the EC2 instance. Welcome to Secure Shell (SSH). So you need an SG rule that allows SSH.
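The allow-only, default-deny behaviour above can be modelled in a few lines. This is a simplified sketch, not the real AWS rule engine (which also matches source CIDR ranges and port ranges):

```python
# Minimal model of security-group inbound evaluation:
# traffic is denied unless some allow rule matches; deny rules don't exist.
allow_rules = [
    {"protocol": "tcp", "port": 22,  "source": "0.0.0.0/0"},  # SSH
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},  # HTTPS
]

def inbound_allowed(protocol: str, port: int) -> bool:
    # Default deny: traffic passes only if an allow rule matches.
    return any(r["protocol"] == protocol and r["port"] == port
               for r in allow_rules)

print(inbound_allowed("tcp", 22))    # True  - SSH rule present
print(inbound_allowed("tcp", 3306))  # False - no rule, denied by default
```

Note there is no way to express "block this"; you shape access purely by what you choose to allow.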

Amazon Machine Image

Back in the old days, when you bought an assembled computer, you needed to install an OS like Microsoft Windows on it. This OS came on a disk; nowadays, it comes pre-installed when you buy, along with some pre-configured software like MS Word, Excel, etc. When you map the same analogy to the virtual space (AWS), it is called an Amazon Machine Image (AMI).

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications).
From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server in the cloud.

Storage:

When you launch an instance, it boots from an AMI. The AMI contains the OS and related software and configs for that particular instance.

The OS and related configs need to be stored somewhere. This storage is called the Root Volume.


Remember: Instance = a share of a physical computer sitting somewhere in the world, accessible through the internet (SSH).

Since that physical computer is connected to the network, it is called the host. This physical host is divided using software called a hypervisor. I explained hypervisors and virtualization here.

Each host comes with storage. This storage is called Instance Store Volume. Instance store volume resides within an EC2 host.

Since the instance store volume is physically on the host itself, it has higher IOPS and lower latency. However, if the instance is terminated, this storage is gone, and the data stored on it is lost.


To persist data, we need storage separate from EC2. This is called Elastic Block Store (EBS) Volumes. It is analogous to the external hard disk we use in day-to-day life.


The overall picture looks like this,

Data persistence is not an issue with EBS volumes, as the EC2 instance and the storage are separated. We can attach and detach EBS volumes to and from EC2 instances.

So the root volume can be either an instance store volume or an EBS volume.


Understanding Cloud Storage Types:

What do we store? Data.

How to access data? Randomly or Sequentially?

What type of data? Audio, Video, DB, etc.

Does data need to be shared?


There are three basic types of storage - Block, File, and Object Storage.

Simply think of a blank disk as Block Storage. A raw disk by itself can't be presented usefully to the user; therefore, we need a file system on the disk. A file system is hierarchy-based.

Block storage + File system = File Storage.

If we don't use a file system, how do we present data to the user? We don't have a hierarchy; all data is stored flat on the disk. Each unit of data is called an Object. This is useful for storing unstructured data and is called Object-based storage.
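The hierarchy-vs-flat distinction can be illustrated with two plain dictionaries. This is only a mental model, not how either storage type is implemented:

```python
# File storage: paths form a tree (directories inside directories).
file_tree = {"home": {"user": {"photos": {"cat.jpg": b"..."}}}}

# Object storage: a flat namespace where each key maps to one object.
# Slashes in keys are just characters, not real directories.
object_store = {
    "home/user/photos/cat.jpg": b"...",
    "logs/2024/app.log": b"...",
}

# Reaching a file means walking the hierarchy level by level...
data = file_tree["home"]["user"]["photos"]["cat.jpg"]
# ...while an object is fetched directly by its full key in one lookup.
same = object_store["home/user/photos/cat.jpg"]
print(data == same)   # True
```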


The above explanation is an oversimplification, but that's the essence, and we can't get into hardware and working detail for this post.

Instance store and EBS volumes belong to the block storage type.

Scalability and Availability:

Understanding the difference between these two remains a key to the overall picture.

Assume your EC2 instance can handle 100 incoming requests, but unfortunately, on the weekend you get 200. How do you handle it? If the instance can't handle the load, it will crash, causing downtime.


What options do we have?

Increase the hardware capacity of the same instance to handle 200 requests (more processor, RAM, storage, network, etc.). This is called Vertical Scaling. You get this load only on the weekend, so most of the time the extra capacity sits idle, and you pay for it sitting idle. This is not a viable option.


The second option would be increasing the number of instances (say, to 2) to handle more requests. This is called Horizontal Scaling. You might think we still have to pay for the extra instance when it is idle. Cloud computing is based on paying for what we use, so AWS must have a way around this; we will elaborate on it later.


Your instance sits inside a region, within an availability zone (AZ). Suppose there is a power outage in your AZ on a certain day; you can't handle any requests. You are not available to serve them. This concept is called Availability. The solution would be placing your instances across AZs. Again, we would have to pay for one sitting idle.


The AWS service that helps handle scalability and availability is called the Elastic Load Balancer (ELB).

  • Combined with Auto Scaling, the fleet of instances behind the ELB grows or shrinks automatically.

  • It performs health checks of connected instances(whether it responds properly to the requests).

  • It sends requests only to the working instances and not to faulty ones.

  • Any request from the user will pass through ELB, and ELB forwards it to the appropriate instances across AZs. ELB is the single point of contact for the user.

  • ELB balances only across AZs and not across regions. So ELB is a regional service.


If you need to balance across regions, you obviously need a global service. We leverage one global service called Route 53. We will deep-dive into it later.


Remembering terminologies:

When configuring a load balancer, we specify what requests come in (protocol, port, etc.) and where to forward them; these two concepts are called the Listener and the Target Group, respectively.
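The bullets above can be tied together in a toy sketch: a listener that forwards requests round-robin to only the healthy instances in its target group. The class names and routing logic are illustrative, not the real ELB implementation:

```python
# Toy ELB: a listener forwards each request to a healthy target,
# round-robin. Instance IDs and logic are illustrative only.
class TargetGroup:
    def __init__(self, instances):
        # instance_id -> healthy flag (set by periodic health checks)
        self.health = {i: True for i in instances}

    def mark_unhealthy(self, instance_id):
        self.health[instance_id] = False   # failed a health check

    def healthy_targets(self):
        return [i for i, ok in self.health.items() if ok]

class Listener:
    """Single point of contact: routes requests to healthy targets only."""
    def __init__(self, target_group):
        self.tg = target_group
        self._next = 0

    def route(self, request):
        targets = self.tg.healthy_targets()
        if not targets:
            return None                    # no healthy instance available
        target = targets[self._next % len(targets)]
        self._next += 1
        return target

tg = TargetGroup(["i-a", "i-b", "i-c"])    # instances spread across AZs
lb = Listener(tg)
print([lb.route(r) for r in range(4)])     # round-robin over all three
tg.mark_unhealthy("i-b")                   # health check fails for i-b
print([lb.route(r) for r in range(4)])     # i-b no longer receives traffic
```

Notice the two halves mirror the terminology: the `Listener` decides which traffic to accept and the `TargetGroup` decides where it may go.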