
Your First Terraform Deployment on AWS

Deploy a web server on AWS with Terraform - VPC, subnets, security groups, and EC2 from scratch.

AWS · Terraform

The best way to learn Terraform is to deploy something real. This guide walks through creating a complete web server stack on AWS - VPC, subnet, security group, and EC2 instance running Nginx.


Prerequisites

  • AWS account with access keys
  • Terraform installed (terraform -v to verify)
  • Basic understanding of VPCs and subnets

Set Up Credentials

export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_DEFAULT_REGION="us-east-1"
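Environment variables are fine for a quick experiment, but avoid pasting keys into files you might commit. If your credentials live in ~/.aws/credentials, the AWS provider can read a named profile instead — a minimal sketch (the profile name "default" here is a placeholder for whatever profile you actually use):

provider "aws" {
  region  = "us-east-1"
  profile = "default"  # placeholder - reads from ~/.aws/credentials
}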

What We're Building

  • VPC with a public subnet
  • Internet Gateway for public access
  • Security group allowing HTTP (port 80)
  • EC2 instance running Nginx

The Terraform Code

Create a file named main.tf:

provider "aws" {
  region = "us-east-1"
}

# Get latest Amazon Linux 2 AMI
data "aws_ssm_parameter" "amzn2_linux" {
  name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

# VPC
resource "aws_vpc" "app" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = { Name = "app-vpc" }
}

# Internet Gateway
resource "aws_internet_gateway" "app" {
  vpc_id = aws_vpc.app.id
}

# Public Subnet
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.app.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true

  tags = { Name = "app-public-subnet" }
}

# Route Table
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.app.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.app.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# Security Group
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow HTTP inbound"
  vpc_id      = aws_vpc.app.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 Instance
resource "aws_instance" "web" {
  ami                    = nonsensitive(data.aws_ssm_parameter.amzn2_linux.value)
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    amazon-linux-extras install nginx1 -y
    systemctl start nginx
    systemctl enable nginx
    echo "<h1>Deployed with Terraform</h1>" > /usr/share/nginx/html/index.html
  EOF

  tags = { Name = "web-server" }
}

output "public_ip" {
  value = aws_instance.web.public_ip
}

Deploy It

# Initialize Terraform
terraform init

# See what will be created
terraform plan

# Create the resources
terraform apply

Type yes when prompted. Terraform creates everything and outputs the public IP:

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

Outputs:
public_ip = "3.70.13.20"

Wait a minute for the instance to boot and Nginx to start, then open the IP in your browser.

Clean Up

Don't leave resources running:

terraform destroy

What's Happening Here

Data source for AMI: Instead of hardcoding an AMI ID that might become outdated, we fetch the latest Amazon Linux 2 AMI from AWS SSM Parameter Store.

User data script: The EC2 instance runs this bash script on first boot. It installs and starts Nginx.

Security group: Allows inbound HTTP (80) from anywhere, and all outbound traffic.

Route table: Sends 0.0.0.0/0 (all traffic) to the Internet Gateway, making this a public subnet.

Extending This

This is a starting point. For production, you'd add:

  • Private subnets for databases
  • Load balancer in front of multiple instances
  • Auto Scaling Group for resilience
  • Remote state backend (S3 + DynamoDB)
  • Separate files for variables and outputs
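As an example of the remote state item above, state is moved out of your local directory with a backend block — a minimal sketch assuming an S3 bucket and DynamoDB lock table that already exist (the bucket and table names here are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # placeholder - must already exist
    key            = "web-server/terraform.tfstate" # path to the state file in the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # placeholder - enables state locking
    encrypt        = true
  }
}

After adding this block, run terraform init again so Terraform migrates the local state to the backend.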

Key Takeaways

  • Terraform tracks state - it knows what exists and what to create/destroy
  • Use data sources for dynamic values like AMI IDs
  • User data bootstraps instances but has limits - consider configuration management for complex setups
  • Always run terraform destroy when done experimenting
  • This foundation scales to complex architectures

Written by Bar Tsveker

Senior CloudOps Engineer specializing in AWS, Terraform, and infrastructure automation.

Thanks for reading! Have questions or feedback?