
Compute

By Atif Alam

Azure offers compute options ranging from full virtual machines to fully managed serverless — choose based on how much control you need.

| Service | Model | You Manage | Azure Manages |
|---|---|---|---|
| Virtual Machines | IaaS | OS, runtime, app, scaling | Hardware, hypervisor |
| App Service | PaaS | App code, config | OS, runtime, scaling, patching |
| Azure Functions | Serverless | Code only | Everything else |
| AKS | Managed Kubernetes | Pods, Deployments, Helm | Control plane |
| Container Instances | Serverless containers | Container image, config | Infrastructure |

Azure VMs are the equivalent of AWS EC2 — fully configurable virtual machines.

VM sizes follow a naming convention: Standard_D4s_v5

```
Standard _ D 4 s _ v5
│          │ │ │   │
│          │ │ │   └─ Version (generation)
│          │ │ └─ Premium storage capable
│          │ └─ vCPUs
│          └─ Family
└─ Tier
```
| Family | Optimized For | Example Use Case |
|---|---|---|
| B (burstable) | Baseline + burst | Dev/test, small web servers |
| D (general) | Balanced compute/memory | Web servers, app servers |
| E (memory) | High memory-to-core ratio | Databases, in-memory analytics |
| F (compute) | High CPU-to-memory ratio | Batch processing, gaming servers |
| N (GPU) | GPU workloads | ML training, rendering |
| L (storage) | High throughput local storage | Big data, data warehousing |
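Size availability varies by region, so it helps to check before picking one. As a sketch, the `--query` filter below uses JMESPath to narrow the output to D-series v5 sizes (the field names come from the CLI's JSON output):

```sh
# List every VM size available in a region
az vm list-sizes --location eastus --output table

# Narrow to D-series v5 sizes with a JMESPath query
az vm list-sizes --location eastus \
  --query "[?contains(name, 'Standard_D') && contains(name, 'v5')].{Name:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}" \
  --output table
```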
```sh
# Create a Linux VM
az vm create \
  --resource-group myapp-rg \
  --name my-vm \
  --image Ubuntu2204 \
  --size Standard_D2s_v5 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --assign-identity   # enable a system-assigned managed identity

# Open a port
az vm open-port --resource-group myapp-rg --name my-vm --port 80

# SSH in
ssh azureuser@<public-ip>
```
| Option | What It Does | Protects Against |
|---|---|---|
| Availability set | Distributes VMs across fault/update domains in one data center | Hardware failure, planned maintenance |
| Availability zone | Distributes VMs across physically separate data centers | Data center failure |
| VM Scale Set | Auto-scaling group of identical VMs | Load spikes, instance failure |
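A minimal sketch of a zone-redundant Scale Set with a CPU-based autoscale rule (the names `my-vmss` and `my-vmss-autoscale` are illustrative):

```sh
# Create a Scale Set of 3 instances spread across three zones
az vmss create \
  --resource-group myapp-rg \
  --name my-vmss \
  --image Ubuntu2204 \
  --vm-sku Standard_D2s_v5 \
  --instance-count 3 \
  --zones 1 2 3 \
  --generate-ssh-keys

# Attach an autoscale profile: keep between 2 and 10 instances
az monitor autoscale create \
  --resource-group myapp-rg \
  --resource my-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name my-vmss-autoscale \
  --min-count 2 --max-count 10 --count 3

# Scale out by 2 when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
  --resource-group myapp-rg \
  --autoscale-name my-vmss-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2
```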
| Model | Discount | Commitment | Best For |
|---|---|---|---|
| Pay-as-you-go | None | None | Short-term, unpredictable workloads |
| Reserved Instances | Up to 72% | 1 or 3 years | Steady-state workloads |
| Savings Plan | Up to 65% | 1 or 3 years (flexible) | Flexible commitment across VM sizes |
| Spot VMs | Up to 90% | None (can be evicted) | Fault-tolerant batch jobs, CI/CD |
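Creating a Spot VM is the same `az vm create` call with a priority flag; a sketch (VM name illustrative):

```sh
# Create a Spot VM; it is deallocated when Azure reclaims the capacity
az vm create \
  --resource-group myapp-rg \
  --name my-spot-vm \
  --image Ubuntu2204 \
  --size Standard_D2s_v5 \
  --priority Spot \
  --eviction-policy Deallocate \
  --max-price -1   # -1 = never pay more than the pay-as-you-go price
```

With `--eviction-policy Deallocate` the VM can be restarted later (disks persist); `Delete` removes it entirely on eviction.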

Run a script when a VM boots (like EC2 user data):

```sh
az vm extension set \
  --resource-group myapp-rg \
  --vm-name my-vm \
  --name customScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"commandToExecute": "apt-get update && apt-get install -y nginx"}'
```

App Service is a fully managed PaaS for hosting web apps, APIs, and mobile backends — no infrastructure to manage.

Supported runtimes: .NET, Java, Node.js, Python, PHP, Ruby, Go, and custom containers.

```sh
# Create an App Service Plan (the underlying compute)
az appservice plan create \
  --name myapp-plan \
  --resource-group myapp-rg \
  --sku B1 \
  --is-linux   # B1 = Basic tier

# Create the web app
az webapp create \
  --resource-group myapp-rg \
  --plan myapp-plan \
  --name my-webapp \
  --runtime "PYTHON:3.12"

# Deploy from a Git repo
az webapp deployment source config \
  --resource-group myapp-rg \
  --name my-webapp \
  --repo-url https://github.com/myorg/myapp \
  --branch main
```
| Tier | Features | Use Case |
|---|---|---|
| Free / Shared | Limited CPU, no custom domain | Testing |
| Basic | Custom domain, manual scale | Dev/test |
| Standard | Auto-scale, staging slots, daily backups | Production |
| Premium | Larger instances, more slots, VNet integration | High-traffic production |
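Scaling works at the plan level. A sketch, reusing the plan and app names from the commands above (note that auto-scale requires Standard or higher, so the Basic `B1` plan would need an upgrade first):

```sh
# Manual scale-out (Basic and above): run 3 instances
az appservice plan update \
  --resource-group myapp-rg --name myapp-plan \
  --number-of-workers 3

# App settings are injected into the app as environment variables
az webapp config appsettings set \
  --resource-group myapp-rg --name my-webapp \
  --settings LOG_LEVEL=info WEBSITES_PORT=8000   # illustrative values
```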

Staging slots let you deploy and test before swapping to production:

```sh
# Create a staging slot
az webapp deployment slot create \
  --resource-group myapp-rg --name my-webapp --slot staging

# Deploy to staging
az webapp deployment source config \
  --resource-group myapp-rg --name my-webapp --slot staging \
  --repo-url https://github.com/myorg/myapp --branch release

# Swap staging to production (zero downtime)
az webapp deployment slot swap \
  --resource-group myapp-rg --name my-webapp --slot staging
```

Azure Functions is the serverless compute service — write code, define a trigger, and Azure handles everything else.

| Trigger | Fires When | AWS Equivalent |
|---|---|---|
| HTTP | HTTP request received | API Gateway + Lambda |
| Timer | Cron schedule | EventBridge + Lambda |
| Blob Storage | New/modified blob | S3 + Lambda |
| Queue Storage | Message in queue | SQS + Lambda |
| Service Bus | Message in Service Bus | SQS/SNS + Lambda |
| Event Grid | Event published | EventBridge + Lambda |
| Cosmos DB | Document changed | DynamoDB Streams + Lambda |
```python
# function_app.py (Python v2 programming model)
import azure.functions as func
import json

app = func.FunctionApp()

@app.route(route="hello", methods=["GET"])
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get('name', 'World')
    return func.HttpResponse(
        json.dumps({"message": f"Hello, {name}!"}),
        mimetype="application/json"
    )

@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer")
def cleanup(timer: func.TimerRequest) -> None:
    # runs every 5 minutes (six-field NCRONTAB: seconds first)
    perform_cleanup()  # placeholder for your cleanup logic

@app.blob_trigger(arg_name="blob", path="uploads/{name}", connection="AzureWebJobsStorage")
def process_upload(blob: func.InputStream) -> None:
    # fires when a new blob appears in the "uploads" container
    data = blob.read()
    process_file(data)  # placeholder for your processing logic
```
| Plan | Scaling | Timeout | Best For |
|---|---|---|---|
| Consumption | Auto (0 to N, scale to zero) | 5 min (max 10) | Event-driven, variable traffic |
| Premium | Pre-warmed (no cold start) | 60 min | Low latency, VNet integration |
| Dedicated | App Service Plan (always running) | Unlimited | Steady load, existing plan |
```sh
# Create a function app
az functionapp create \
  --resource-group myapp-rg \
  --consumption-plan-location eastus \
  --name my-func-app \
  --runtime python \
  --runtime-version 3.12 \
  --storage-account mystorageacct

# Deploy code
func azure functionapp publish my-func-app
```

AKS is Azure’s managed Kubernetes. Azure manages the control plane (API server, etcd, scheduler); you manage the worker nodes and workloads.

| | AKS | EKS |
|---|---|---|
| Control plane cost | Free | $0.10/hour (~$73/month) |
| Node options | VM Scale Sets, Spot, Virtual Nodes | Managed node groups, Fargate |
| Identity | Entra ID + Azure RBAC | IAM |
| Networking | Azure CNI or kubenet | AWS VPC CNI |
| Monitoring | Azure Monitor / Container Insights | CloudWatch Container Insights |
```sh
az aks create \
  --resource-group myapp-rg \
  --name my-cluster \
  --node-count 3 \
  --node-vm-size Standard_D2s_v5 \
  --enable-managed-identity \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group myapp-rg --name my-cluster

# Use kubectl
kubectl get nodes
```
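From here it's standard Kubernetes. As a sketch, deploying a container and exposing it through an Azure load balancer (the image path mirrors the ACR-style reference used later in this article; port 8080 is an assumed container port):

```sh
# Run 3 replicas of the app
kubectl create deployment my-app \
  --image=myregistry.azurecr.io/my-app:latest --replicas=3

# Expose it; AKS provisions an Azure load balancer with a public IP
kubectl expose deployment my-app \
  --type=LoadBalancer --port=80 --target-port=8080

# Watch until EXTERNAL-IP is assigned
kubectl get service my-app --watch
```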

AKS supports multiple node pools with different VM sizes:

```sh
# Add a GPU node pool
az aks nodepool add \
  --resource-group myapp-rg \
  --cluster-name my-cluster \
  --name gpupool \
  --node-count 2 \
  --node-vm-size Standard_NC6s_v3

# Add a Spot node pool (cheap, interruptible)
az aks nodepool add \
  --resource-group myapp-rg \
  --cluster-name my-cluster \
  --name spotpool \
  --node-count 5 \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1   # pay up to on-demand price
```
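AKS taints Spot nodes with `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so only pods that explicitly tolerate the taint are scheduled there. A sketch of a fault-tolerant Job pinned to the Spot pool (the Job name and image are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    spec:
      restartPolicy: Never
      # Allow scheduling onto tainted Spot nodes
      tolerations:
        - key: kubernetes.azure.com/scalesetpriority
          operator: Equal
          value: spot
          effect: NoSchedule
      # Require Spot nodes so the job never lands on on-demand capacity
      nodeSelector:
        kubernetes.azure.com/scalesetpriority: spot
      containers:
        - name: worker
          image: myregistry.azurecr.io/batch-worker:latest
EOF
```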

Virtual Nodes use Azure Container Instances (ACI) as a burst capacity layer — pods scale beyond your node pool without adding VMs:

```sh
az aks enable-addons --resource-group myapp-rg --name my-cluster --addons virtual-node
```

ACI runs containers without any infrastructure — no cluster, no nodes. Good for simple, short-lived workloads.

```sh
az container create \
  --resource-group myapp-rg \
  --name my-container \
  --image myregistry.azurecr.io/my-app:latest \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --ip-address Public
```

ACI is like AWS Fargate but simpler (no ECS/EKS — just a container).
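Day-two operations are equally minimal; a sketch using the container created above:

```sh
# Stream logs
az container logs --resource-group myapp-rg --name my-container --follow

# Open a shell inside the running container
az container exec --resource-group myapp-rg --name my-container \
  --exec-command "/bin/sh"

# Tear it down when done (ACI bills per second while running)
az container delete --resource-group myapp-rg --name my-container --yes
```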

| Workload | Recommended Service |
|---|---|
| Traditional server, full OS control | Virtual Machines |
| Web app, no infra management | App Service |
| Event-driven, short-lived (< 10 min) | Azure Functions (Consumption) |
| Containerized app, Kubernetes needed | AKS |
| Simple container, no orchestration | Container Instances |
| Batch processing, fault-tolerant | VM Spot or Functions |
| Low-latency serverless | Azure Functions (Premium) |
  • Virtual Machines give full control. Use availability zones for HA and VM Scale Sets for auto-scaling.
  • App Service is PaaS for web apps — deployment slots enable zero-downtime releases.
  • Azure Functions is serverless — Consumption plan scales to zero; Premium plan eliminates cold starts.
  • AKS is managed Kubernetes with a free control plane. Use multiple node pools for different workloads.
  • Use managed identities on all compute resources for secure, credential-free access to other Azure services.
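With a managed identity enabled (as in the `az vm create --assign-identity` example earlier), code on the VM obtains tokens from the local instance metadata service instead of storing credentials. A sketch of both common patterns:

```sh
# Sign the Azure CLI in as the VM's managed identity (no password, no key)
az login --identity

# Or fetch an access token directly from the IMDS endpoint
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```

SDKs do the same thing under the hood (e.g. `DefaultAzureCredential` in the Azure SDKs), so application code needs no secret handling at all.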