DevOps 108: Understanding the Different AWS Compute Services - EC2, Lambda, Batch, Elastic Beanstalk, and Fargate
Amazon Web Services (AWS) offers a wide range of cloud computing services to its customers. AWS Compute is one of the core services provided by AWS, which enables users to run applications and workloads on the cloud without worrying about the underlying infrastructure. AWS Compute is a set of services that provides scalable compute resources to users cost-effectively. In this blog, we will discuss AWS Compute in detail.
AWS Compute provides scalable compute resources to users, which can be scaled up or down based on the demand. AWS Compute includes a range of services such as Amazon EC2, AWS Lambda, AWS Batch, AWS Elastic Beanstalk, and more.
The following are the key services that are part of AWS Compute:
1. Amazon Elastic Compute Cloud (EC2)
Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud.
EC2 enables users to launch virtual machines (VMs) on the cloud and scale the capacity up or down as needed.
EC2 provides a variety of instance types that are optimized for different use cases such as:
General Purpose: Application servers, backend servers
Compute Optimized: Gaming servers
Memory Optimized: Processing large in-memory datasets
Accelerated Computing: Graphics processing
Storage Optimized: I/O-intensive workloads
EC2 instances can be launched in different regions and availability zones, which enables users to deploy applications close to their end-users and provide high availability and fault tolerance.
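As a rough illustration, here is a minimal sketch of launching an instance with the AWS SDK for Python (boto3); the AMI ID, key pair name, and region below are placeholders you would replace with your own values.

```python
import boto3

# Create an EC2 client in the region where the instance should run
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single general-purpose instance (AMI and key pair are placeholders)
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # replace with a real AMI ID for your region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # assumes this key pair already exists
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```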
2. AWS Lambda
AWS Lambda is a serverless computing service that allows users to run code without provisioning or managing servers. With Lambda, users can run code in response to events such as changes in data, user actions, or other triggers.
It is a fully-managed service that automatically scales the compute capacity based on the incoming request volume.
Three major parts of AWS Lambda:
Input - Lambda functions receive input such as event data from other AWS services or requests routed through API Gateway.
Functions - A function can be invoked from the AWS Console, AWS SDKs, AWS Toolkits, the AWS CLI, Function URLs (HTTP), and triggers.
Output - Functions make calls to downstream resources. From your code, you can make API calls to Amazon SQS, Amazon SNS, or DynamoDB.
Users are charged only for the number of requests sent to their functions and the compute time consumed by their code (rounded up to the nearest 1 ms), making Lambda a cost-effective solution for running small, short-lived functions.
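To make this concrete, here is a minimal sketch of a Python Lambda handler; the event shape assumes an API Gateway proxy integration, which is just one of the possible input sources mentioned above.

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes for each event (API Gateway proxy event assumed)."""
    # Pull the caller's name from the query string, if present
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # Return a response in the shape API Gateway expects
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```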
3. AWS Elastic Beanstalk
AWS Elastic Beanstalk is a fully managed service that makes it easy to deploy, manage, and scale applications written in languages such as Java, Go, PHP, Python, Node.js, Ruby, and more.
Elastic Beanstalk automatically handles deployment, capacity provisioning, load balancing, and scaling of the application. The resources it manages include EC2 instances, Auto Scaling, Elastic Load Balancing, and application health monitoring.
Users can deploy their applications using different deployment options such as AWS CodeDeploy, Git, and more.
Elastic Beanstalk provides a variety of environments such as web server environments, worker environments, and more.
Key Elastic Beanstalk concepts:
Application Version
Environment
Environment Configuration
Environment Tier
Web Server Environment: EC2, ELB, Security Group, Route 53, Auto Scaling
Worker Environment: EC2, SQS Queue, IAM Service Role, Auto Scaling
Platform
Application
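As a rough sketch of how these pieces map onto API calls, the snippet below registers an application and a version, then creates a web server environment with boto3. The application name, version label, S3 bundle, and solution stack name are placeholders; Beanstalk provisions the EC2, ELB, and Auto Scaling resources behind the environment for you.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the application and a version pointing at a source bundle in S3 (placeholders)
eb.create_application(ApplicationName="demo-app")
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "demo-app-v1.zip"},
)

# Create a web server environment; the stack name is illustrative --
# list valid ones with eb.list_available_solution_stacks()
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2023 v4.0.0 running Python 3.11",
)
```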
4. AWS Batch
AWS Batch is a managed batch processing service that enables users to run batch computing workloads on the AWS Cloud. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the user's defined job queue.
AWS Batch can be used to run batch workloads such as analyzing financial risk models, media processing, engineering simulations, and more. Users can define the compute resources required for their job and AWS Batch will automatically provision and scale the resources to meet the demand.
AWS Batch Components:
Jobs - A job is the unit of work that is to be run by AWS Batch
Job Definitions - These define specific parameters for the jobs themselves and dictate how the job will run
Job Queues - Jobs that are scheduled are placed into a job queue until they run
Job Scheduling - The job scheduler decides when, and from which compute environment, a job should be run. By default it works in a FIFO manner.
Compute Environments - These are environments containing the compute resources to carry out the job.
Managed Environments
Unmanaged Environments
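As a minimal sketch of how these components fit together, the snippet below registers a container-based job definition and submits a job to a job queue; the queue, IAM setup, and compute environment are assumed to already exist, and the image and resource sizes are illustrative.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Register a simple container job definition (image, vCPUs, and memory are illustrative)
batch.register_job_definition(
    jobDefinitionName="demo-job-def",
    type="container",
    containerProperties={
        "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
        "vcpus": 1,
        "memory": 512,
        "command": ["echo", "hello from AWS Batch"],
    },
)

# Submit a job to an existing job queue; the scheduler picks it up from there
batch.submit_job(
    jobName="demo-job",
    jobQueue="demo-job-queue",          # assumes this queue is already configured
    jobDefinition="demo-job-def",
)
```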
5. AWS Fargate
AWS Fargate is a serverless compute engine for containers that works with Amazon ECS (and Amazon EKS), allowing users to run containers without managing servers or clusters.
With Fargate, users can run containers on the cloud without worrying about the underlying infrastructure.
Fargate provides a fully-managed environment that automatically scales the compute resources based on the demand.
Users are only charged for the resources consumed by their containers.
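A rough sketch of launching a container on Fargate through the ECS API is shown below; the cluster, registered task definition, and subnet ID are placeholders for resources you would already have.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run one task on Fargate; no EC2 instances need to be provisioned or managed
ecs.run_task(
    cluster="demo-cluster",                  # assumed to exist
    launchType="FARGATE",
    taskDefinition="demo-task:1",            # assumed to be registered already
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
```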
6. Elastic Container Service (ECS)
ECS removes the need for you to run your own cluster management system, and through its integration with AWS Fargate the underlying compute can be managed for you as well.
With Amazon ECS there is no need to install any management or monitoring software for your cluster.
Amazon ECS Cluster:
Clusters act as a resource pool, aggregating resources such as CPU and memory.
Clusters are dynamically scalable and multiple instances can be used.
A cluster can only scale within a single region.
Containers can be scheduled to be deployed across your cluster.
Instances within the cluster also have a Docker Daemon and an ECS agent.
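As an illustrative sketch, the snippet below creates a cluster and registers a simple task definition that could then be scheduled onto it; the names are placeholders and the image is a public nginx image.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Create a cluster, which acts as the resource pool for your tasks and services
ecs.create_cluster(clusterName="demo-cluster")

# Register a task definition describing one container to schedule onto the cluster
ecs.register_task_definition(
    family="demo-task",
    containerDefinitions=[{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder image
        "memory": 256,
        "essential": True,
        "portMappings": [{"containerPort": 80}],
    }],
)
```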
7. Elastic Container Registry (ECR)
ECR provides a secure location to store and manage your Docker images.
This is a fully managed service, so you don't need to provision any infrastructure to create this registry of Docker images.
This service allows developers to push, pull, and manage their library of Docker images in a central and secure location.
Some of the components used in ECR:
Registry
Authorization Token
Repository
Repository Policy
Image
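For illustration, here is a minimal sketch that creates a repository and retrieves the authorization token the Docker CLI uses to log in; the repository name is a placeholder.

```python
import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create a private repository to hold the Docker images for one application
repo = ecr.create_repository(repositoryName="demo-app")
print(repo["repository"]["repositoryUri"])

# Fetch a temporary authorization token used to authenticate docker push / docker pull
auth = ecr.get_authorization_token()
token = base64.b64decode(auth["authorizationData"][0]["authorizationToken"]).decode()
username, password = token.split(":")   # typically "AWS" and a temporary password
```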
8. Elastic Kubernetes Service (EKS)
AWS provides a managed service that lets you run Kubernetes on your AWS infrastructure without having to provision and operate the Kubernetes management layer, known as the control plane.
You only need to provision and maintain the worker nodes.
In EKS, AWS is responsible for provisioning, scaling, and managing the control plane, and it does this by utilizing multiple Availability Zones for additional resilience.
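As a small sketch, the control-plane endpoint and certificate data that a kubeconfig needs can be read back from the EKS API; the cluster name below is a placeholder for a cluster you have already created.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Describe an existing cluster; AWS operates the control plane behind this endpoint
cluster = eks.describe_cluster(name="demo-eks-cluster")["cluster"]

print(cluster["status"])                             # e.g. "ACTIVE"
print(cluster["endpoint"])                           # Kubernetes API server endpoint
print(cluster["certificateAuthority"]["data"][:20])  # base64 CA data for a kubeconfig
print(cluster["version"])                            # Kubernetes version of the control plane
```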
Conclusion
AWS Compute is a set of cloud computing services that provides scalable and cost-effective compute resources to users. With services such as Amazon EC2, AWS Lambda, AWS Elastic Beanstalk, AWS Batch, and more, users can deploy and run applications and workloads on the cloud without worrying about the underlying infrastructure. AWS Compute services are fully managed and automated, providing users with scalability, cost-effectiveness, flexibility, automation, and security. AWS Compute services can be used in a variety of use cases such as web applications, data processing, and microservices.
Stay tuned for more such blogs and subscribe to my newsletter to not miss any of my blogs.
Thank you for reading! I hope you enjoyed this blog post and found it informative and valuable. If you have any questions or comments, please don't hesitate to leave them below. I'm always happy to engage with my readers and discuss various topics.
Follow my DevOps Series
Reach Out To Me On Linkedin: Raj Panchal