
Compute

First published by Atif Alam

AWS offers several compute options depending on how much control you need over the underlying infrastructure:

| Service | Model | You Manage | AWS Manages |
| --- | --- | --- | --- |
| EC2 | Virtual machines | OS, runtime, app, scaling | Hardware, hypervisor |
| Lambda | Serverless functions | Code only | Everything else |
| ECS | Container orchestration | Task definitions, app | Cluster (with Fargate) |
| EKS | Managed Kubernetes | Pods, Deployments | Control plane |

For creating an EKS cluster with Terraform (VPC, private API, managed node groups), see Kubernetes → EKS.

EC2 gives you virtual machines (called instances) that you can configure with any OS, software, and settings.

Instance types are named like m5.xlarge:

```
m 5 . xlarge
│ │   │
│ │   └─ Size (nano → micro → small → medium → large → xlarge → 2xlarge ...)
│ └─ Generation (higher = newer hardware)
└─ Family
```
| Family | Optimized For | Example Use Case |
| --- | --- | --- |
| t (burstable) | General purpose, baseline + burst | Dev/test, small apps, microservices |
| m (general) | Balanced compute/memory/network | Web servers, application servers |
| c (compute) | CPU-intensive workloads | Batch processing, encoding, scientific modeling |
| r (memory) | Memory-intensive workloads | In-memory databases, caches, analytics |
| g/p (accelerated) | GPU workloads | ML training, graphics rendering |
| i (storage) | High I/O, local NVMe storage | Databases, data warehousing |
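This naming scheme can be parsed mechanically. A small Python sketch — the `parse_instance_type` helper is illustrative, not part of any AWS SDK:

```python
import re

def parse_instance_type(name):
    """Split an EC2 instance type like 'm5.xlarge' into its parts."""
    match = re.fullmatch(r'([a-z]+)(\d+)([a-z]*)\.(\w+)', name)
    if not match:
        raise ValueError(f"Unrecognized instance type: {name}")
    family, generation, attributes, size = match.groups()
    return {
        'family': family,            # e.g. m = general purpose
        'generation': int(generation),
        'attributes': attributes,    # e.g. the 'g' in c6g = Graviton
        'size': size,
    }

print(parse_instance_type('m5.xlarge'))
```

Note the optional letters after the generation digit (as in `c6g` or `m5a`), which encode processor or feature variants.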
```bash
# Launch a t3.micro instance with Amazon Linux 2023
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0abc123 \
  --subnet-id subnet-0abc123 \
  --iam-instance-profile Name=MyEC2Profile
```
| Parameter | What It Is |
| --- | --- |
| AMI | Amazon Machine Image — the OS template (Amazon Linux, Ubuntu, Windows, custom) |
| Key pair | SSH key for remote access (`.pem` file). Create once, reuse across instances. |
| Security group | Firewall rules (inbound/outbound). Covered in Networking. |
| Subnet | Network placement within your VPC. Determines the AZ. |
| Instance profile | IAM role attached to the instance (no access keys needed). |
```
pending → running → stopping → stopped → terminated
              ↘ shutting-down → terminated
```
  • Running — billed per second (minimum 60 seconds).
  • Stopped — no compute charge, but EBS volumes still billed.
  • Terminated — gone (EBS root volume deleted by default).
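The billing rules above reduce to simple arithmetic. A minimal sketch — the hourly rate is an assumed example, not a quoted price:

```python
def ec2_compute_cost(seconds_running, hourly_rate):
    """Per-second billing with a 60-second minimum per running period."""
    billed_seconds = max(seconds_running, 60)
    return billed_seconds * hourly_rate / 3600

# Assumed example rate for a t3.micro (check current pricing)
RATE = 0.0104
print(ec2_compute_cost(45, RATE))    # 45 s of runtime is billed as 60 s
print(ec2_compute_cost(3600, RATE))  # one full hour at the hourly rate
```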
| Model | Discount | Commitment | Best For |
| --- | --- | --- | --- |
| On-demand | None (full price) | None | Short-term, unpredictable workloads |
| Reserved | Up to 72% | 1 or 3 years | Steady-state, predictable workloads |
| Savings Plans | Up to 72% | 1 or 3 years (flexible) | Flexible commitment (can change instance type) |
| Spot | Up to 90% | None (can be interrupted) | Fault-tolerant batch jobs, CI/CD runners |
| Dedicated Hosts | Varies | Optional | Compliance (licensing, regulatory) |
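To see what the maximum discounts in the table mean in dollars, apply them to an assumed on-demand rate. The `0.0416` figure below is illustrative only (check current pricing):

```python
def discounted_rate(on_demand_hourly, discount_pct):
    """Effective hourly rate after a percentage discount."""
    return on_demand_hourly * (1 - discount_pct / 100)

on_demand = 0.0416     # assumed t3.medium on-demand rate
monthly_hours = 730    # average hours in a month

for model, pct in [('on-demand', 0), ('reserved (max)', 72), ('spot (max)', 90)]:
    monthly = discounted_rate(on_demand, pct) * monthly_hours
    print(f"{model:15s} ~${monthly:.2f}/month")
```

Spot's larger discount comes with the interruption risk noted in the table, so the comparison only holds for workloads that tolerate it.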

Run a script when the instance first boots:

```bash
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
echo "<h1>Hello from $(hostname)</h1>" > /var/www/html/index.html
```

Pass this as `--user-data file://bootstrap.sh` when launching.

| Feature | What It Does |
| --- | --- |
| Elastic IP | A static public IP address you can attach/detach from instances |
| Placement groups | Control instance placement (cluster for low latency, spread for HA) |
| Auto Scaling | Automatically add/remove instances based on demand (CPU, request count, schedule) |
| Launch templates | Versioned templates for instance configuration (replaces launch configs) |
| AMI | Snapshot your configured instance as a custom image for reuse |

Lambda runs your code in response to events — no servers to provision, patch, or scale. You pay only for the compute time consumed.

```
Event source ──► Lambda function ──► Output
(API Gateway,      (your code)       (response,
 S3, SQS,                             write to DB,
 schedule)                            send to SQS)
```
  1. An event triggers the function (HTTP request, file upload, message, cron schedule).
  2. Lambda creates an execution environment (or reuses a warm one).
  3. Your code runs and returns a response.
  4. You’re billed for the duration (in 1ms increments) × memory allocated.
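Step 4's billing formula works out as follows. The per-GB-second and per-request rates below are assumptions to illustrate the arithmetic, not quoted prices:

```python
def lambda_cost(invocations, avg_duration_ms, memory_mb,
                price_per_gb_second=0.0000166667,  # assumed rate, check current pricing
                price_per_request=0.0000002):      # assumed rate
    """Billed duration (1 ms granularity) x allocated memory, plus a per-request fee."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * price_per_gb_second + invocations * price_per_request

# 1M invocations at 120 ms average on a 256 MB function
print(f"${lambda_cost(1_000_000, 120, 256):.2f}")  # $0.70
```

Because cost scales with both duration and memory, halving duration by doubling memory can leave the bill roughly unchanged — which is why the memory-tuning advice later in this section can pay off.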
lambda_function.py
```python
import json

def lambda_handler(event, context):
    name = event.get('queryStringParameters', {}).get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }
```
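The handler can be exercised locally before deploying by hand-building an event in the API Gateway proxy shape. The handler is repeated here so the snippet runs standalone, with no AWS account needed:

```python
import json

def lambda_handler(event, context):
    name = event.get('queryStringParameters', {}).get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }

# Simulate an API Gateway proxy event locally
event = {'queryStringParameters': {'name': 'Alice'}}
response = lambda_handler(event, None)
assert response['statusCode'] == 200
print(json.loads(response['body'])['message'])  # Hello, Alice!
```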
```bash
# Deploy (zip approach)
zip function.zip lambda_function.py
aws lambda create-function \
  --function-name hello \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/LambdaExecutionRole

# Invoke (AWS CLI v2 needs --cli-binary-format for a raw JSON payload)
aws lambda invoke --function-name hello \
  --cli-binary-format raw-in-base64-out \
  --payload '{"queryStringParameters":{"name":"Alice"}}' output.json
```
| Source | Use Case |
| --- | --- |
| API Gateway | HTTP APIs (REST, WebSocket) |
| S3 | Process file uploads (resize images, parse CSVs) |
| SQS | Process queue messages |
| DynamoDB Streams | React to database changes |
| EventBridge (schedule) | Cron jobs (`rate(1 hour)`, `cron(0 9 * * ? *)`) |
| SNS | Fan-out notifications |
| Kinesis | Stream processing |
| Limit | Value |
| --- | --- |
| Timeout | Max 15 minutes |
| Memory | 128 MB – 10,240 MB |
| Package size | 50 MB (zip), 250 MB (unzipped), 10 GB (container image) |
| Concurrent executions | 1,000 per region (soft limit, can be increased) |
| `/tmp` storage | 512 MB – 10,240 MB |
  • Keep functions small and focused — one function per task.
  • Minimize cold starts — use provisioned concurrency for latency-sensitive functions, or choose lighter runtimes (Python, Node.js).
  • Use environment variables for configuration (not hardcoded values).
  • Use layers for shared libraries.
  • Set appropriate memory — more memory = more CPU = faster execution (may cost less overall).
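The environment-variable advice looks like this in practice. `TABLE_NAME` and `LOG_LEVEL` are illustrative names; in a real function they would be set with `aws lambda update-function-configuration --environment` rather than baked into the code:

```python
import os

# Read configuration from the environment with safe defaults,
# instead of hardcoding values in the function body.
TABLE_NAME = os.environ.get('TABLE_NAME', 'dev-table')   # hypothetical setting
LOG_LEVEL = os.environ.get('LOG_LEVEL', 'INFO')          # hypothetical setting

def lambda_handler(event, context):
    # Configuration is resolved once at import time (reused on warm starts)
    return {'table': TABLE_NAME, 'log_level': LOG_LEVEL}

print(lambda_handler({}, None))
```

Reading the variables at module level (outside the handler) means warm invocations skip the lookup, which is a common Lambda idiom.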

ECS runs Docker containers on AWS. You define task definitions (like a Kubernetes pod spec) and ECS handles placement and scaling.

| Launch Type | You Manage | AWS Manages | Best For |
| --- | --- | --- | --- |
| Fargate | Task definitions, app | Infrastructure (no EC2 instances) | Simplicity, most use cases |
| EC2 | EC2 instances + tasks | Orchestration | Full control, GPU workloads, cost optimization at scale |
```
┌─────────────────────────────────────┐
│             ECS Cluster             │
│   ┌─────────────────────────────┐   │
│   │    Service (desired: 3)     │   │
│   │ ┌──────┐ ┌──────┐ ┌──────┐  │   │
│   │ │ Task │ │ Task │ │ Task │  │   │
│   │ │(cont-│ │(cont-│ │(cont-│  │   │
│   │ │ainer)│ │ainer)│ │ainer)│  │   │
│   │ └──────┘ └──────┘ └──────┘  │   │
│   └─────────────────────────────┘   │
└─────────────────────────────────────┘
```
| Concept | What It Is |
| --- | --- |
| Cluster | Logical grouping of tasks/services |
| Task definition | Blueprint for a container (image, CPU, memory, ports, env vars, IAM role) |
| Task | A running instance of a task definition (like a Kubernetes pod) |
| Service | Maintains a desired count of tasks, integrates with load balancers, handles rolling updates |
```json
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [{
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "portMappings": [{"containerPort": 8080}],
    "environment": [
      {"name": "DB_HOST", "value": "mydb.cluster-xyz.us-east-1.rds.amazonaws.com"}
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }]
}
```
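Fargate only accepts specific CPU/memory pairings, so a local sanity check before running `aws ecs register-task-definition` can catch mistakes early. This sketch hardcodes a subset of the valid combinations (the full list is in the AWS docs); `validate_task_def` is an illustrative helper, not an AWS API:

```python
import json

# Valid Fargate CPU/memory (MB) pairings — subset, see AWS docs for the full list
FARGATE_COMBOS = {
    '256': {'512', '1024', '2048'},
    '512': {'1024', '2048', '3072', '4096'},
    '1024': {'2048', '3072', '4096', '5120', '6144', '7168', '8192'},
}

def validate_task_def(raw):
    """Sanity-check a Fargate task definition JSON string before registering it."""
    td = json.loads(raw)
    for key in ('family', 'cpu', 'memory', 'containerDefinitions'):
        if key not in td:
            raise ValueError(f"missing required key: {key}")
    if 'FARGATE' in td.get('requiresCompatibilities', []):
        if td['memory'] not in FARGATE_COMBOS.get(td['cpu'], set()):
            raise ValueError(f"invalid Fargate combo: cpu={td['cpu']} memory={td['memory']}")
    return td

td = validate_task_def('{"family": "my-app", "cpu": "256", "memory": "512", '
                       '"requiresCompatibilities": ["FARGATE"], '
                       '"containerDefinitions": [{"name": "app"}]}')
print(td['family'])  # my-app
```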

EKS is AWS’s managed Kubernetes service. AWS runs the control plane (API server, etcd, scheduler); you manage the worker nodes and your Kubernetes workloads.

|  | ECS | EKS |
| --- | --- | --- |
| Orchestrator | AWS-proprietary | Kubernetes (open standard) |
| Learning curve | Lower (AWS-native concepts) | Higher (Kubernetes concepts) |
| Portability | AWS only | Multi-cloud, on-prem |
| Ecosystem | AWS tooling | Huge K8s ecosystem (Helm, Istio, Argo, etc.) |
| Best for | Simpler container workloads, AWS-only shops | Teams already using K8s, multi-cloud strategy |
| Node Type | Description |
| --- | --- |
| Managed node groups | AWS manages EC2 instances (patching, scaling). You choose instance type. |
| Fargate | Serverless — no EC2. Each pod runs in its own micro-VM. |
| Self-managed | You manage the EC2 instances entirely (most control, most effort). |
```bash
# Create cluster (using eksctl — the official CLI tool)
eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 3

# Update kubeconfig
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Now use kubectl as normal
kubectl get nodes
kubectl apply -f deployment.yaml
```
| Workload | Recommended Service |
| --- | --- |
| Traditional web server, full OS control | EC2 |
| Event-driven, short-lived tasks (< 15 min) | Lambda |
| Containerized app, simple orchestration | ECS Fargate |
| Containerized app, Kubernetes ecosystem needed | EKS |
| Batch processing, fault-tolerant | EC2 Spot or Lambda |
| GPU / ML training | EC2 (p/g instances) or SageMaker |
  • EC2 gives you full control over virtual machines — choose instance type, OS, pricing model (on-demand, reserved, spot).
  • Lambda is serverless — no servers to manage, pay per invocation, max 15 minutes per execution.
  • ECS runs Docker containers with Fargate (serverless) or EC2 launch types. Good for AWS-native shops.
  • EKS is managed Kubernetes — more complex but portable and ecosystem-rich.
  • Use IAM roles (not access keys) for EC2 instances and Lambda functions.
  • Auto Scaling + load balancers handle traffic spikes for EC2 and ECS.