Terraform Basics #2 - Resource Provisioning

Last Edited: 7/5/2025

This blog post introduces the basics of how to provision resources in Terraform.

DevOps

In the previous article, we discussed what Terraform is, what it's for, and its installation process and standard deployment workflow. In this article, we'll start going through that workflow in detail and provision basic resources with a simple example.

Remote Backend

Terraform code in .tf files mainly consists of three kinds of blocks: the terraform block, the provider block, and the resource block. The terraform block defines the backend, where the state file (with the .tfstate extension) is hosted, and the required provider versions. The provider block configures the providers, and the resource block defines the resources offered by those providers. The example main.tf below uses those blocks with the default local backend to set up an S3 bucket for hosting the state file and a DynamoDB table for state locking (analogous to mutex locking).

remote-backend/main.tf
# Terraform Block
terraform {
    # No `backend` block is needed for the default local backend
    required_providers {
        aws = {
            source = "hashicorp/aws"
            version = "-> 3.0"
        }
    }
}
 
# Provider Block (Assuming AWS CLI is configured properly)
provider "aws" {
    region = "ap-northeast-1"
}
 
# Resource Block
# Syntax: block_type "resource_type" "user_defined_name" { ...arguments... }
 
# S3 bucket for hosting the state file (resource label "terraform_state")
resource "aws_s3_bucket" "terraform_state" {
    bucket        = "tf-state"
    force_destroy = true
    versioning {
        enabled = true
    }
 
    server_side_encryption_configuration {
        rule {
            apply_server_side_encryption_by_default {
                sse_algorithm = "AES256"
            }
        }
    }
}
 
# DynamoDB table for state locking (resource label "terraform_locks")
resource "aws_dynamodb_table" "terraform_locks" {
    name         = "tf-state-locking"
    billing_mode = "PAY_PER_REQUEST"
    hash_key     = "LockID"
    attribute {
        name = "LockID"
        type = "S"
    }
}

Terraform supports basic data types such as string, number, bool, and list, along with other language features that we will discuss in the future. Using syntax like the above, we can define a backend (local), provider versions, providers (AWS), and the corresponding resources (an S3 bucket and a DynamoDB table). Once the resources are defined, we can initialize the working directory with terraform init, check the formatting with terraform fmt -check, fix the formatting by removing the -check flag, preview the changes with terraform plan, and apply the configuration with terraform apply. You can confirm successful resource creation in the AWS console.
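As a quick illustration of those types, the hypothetical variable declarations below show how each one can appear in Terraform code; variable blocks themselves (and how to supply their values) are a topic for a later article.

# Illustrative only: basic Terraform types in variable declarations
variable "instance_name"      { type = string }
variable "instance_count"     { type = number }
variable "enable_monitoring"  { type = bool }
variable "availability_zones" { type = list(string) }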

remote-backend/main.tf
# Terraform Block
terraform {
    backend "s3" {
        bucket         = "tf-state"
        key            = "tf-infra/terraform.tfstate" # where we store .tfstate
        region         = "ap-northeast-1"
        dynamodb_table = "tf-state-locking"
        encrypt        = true
    }
 
    required_providers {
        aws = {
            source = "hashicorp/aws"
            version = "-> 3.0"
        }
    }
}

Previously, we defined an S3 bucket and a DynamoDB table with a local backend. However, we often want to set up a remote backend to host the state file with appropriate state locking for effective collaboration. To switch to a remote backend, we can add a backend block (like the one in the code above) and re-initialize with terraform init. Terraform detects the backend change and prompts us to migrate the local state to the S3 bucket, which we can confirm to complete the remote backend setup. This bucket can then also host the .tfstate files for other projects.

Basic Web Architecture Example

The simplest architecture for a web application would consist of a reverse proxy (load balancer), a remote server, and a relational database. A second, slightly more complex architecture would utilize an Application Load Balancer (ALB), two EC2 instances, and a managed relational database service (RDS), and we will use this architecture as an example to learn how to provision resources for basic web applications with Terraform. We can begin by setting up terraform and provider blocks, similar to the previous section, using a different backend key (e.g., webapp/terraform.tfstate).
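As a reference, a minimal sketch of the top of web-app/main.tf is shown below. It reuses the bucket and lock table created in the previous section; the webapp/terraform.tfstate key is just an illustrative choice.

web-app/main.tf
# Terraform Block (reusing the remote backend from the previous section)
terraform {
    backend "s3" {
        bucket         = "tf-state"
        key            = "webapp/terraform.tfstate" # separate state file for this project
        region         = "ap-northeast-1"
        dynamodb_table = "tf-state-locking"
        encrypt        = true
    }

    required_providers {
        aws = {
            source  = "hashicorp/aws"
            version = "~> 3.0"
        }
    }
}

# Provider Block
provider "aws" {
    region = "ap-northeast-1"
}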

web-app/main.tf
# EC2 Instances (Instance 1 & Instance 2)
resource "aws_instance" "instance_1" {
    ami             = "ami-0822295a729d2a28e" # Amazon Machine Image (AMI) for Ubuntu 16.04 LTS ap-northeast-1
    instance_type   = "t3.micro"
    security_groups = [aws_security_group.instances.name] # Referring to security group "instances" defined later
    # Simplest Python HTTP server serving index.html with "This is Instance 1" in it
    user_data       = <<-EOF
                #!/bin/bash
                echo "This is Instance 1" > index.html
                python3 -m http.server 8080 & 
                EOF
}
 
resource "aws_instance" "instance_2" {
    ami             = "ami-0822295a729d2a28e"
    instance_type   = "t3.micro"
    security_groups = [aws_security_group.instances.name]
    user_data       = <<-EOF
                #!/bin/bash
                echo "This is Instance 2" > index.html
                python3 -m http.server 8080 & 
                EOF
}
 
# EC2 Security Group
resource "aws_security_group" "instances" {
    name = "instance-security-group"
}
 
# EC2 Security Group Rule
resource "aws_security_group_rule" "allow_http_inbound" {
    type              = "ingress"
    security_group_id = aws_security_group.instances.id
 
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # allow traffic from any address
}

After defining the terraform and provider blocks, we can start defining resource configurations, beginning with the EC2 instances. We can reference an Amazon Machine Image (AMI) for Ubuntu in the ap-northeast-1 region and instantiate t3.micro instances serving an HTML file on port 8080, as shown above. (AMIs for Ubuntu can be found at the Amazon EC2 AMI Locator.) As we briefly mentioned previously, while bash scripts like the user_data above can be used for instance configuration, they are fairly limited, which is why we typically pair Terraform with a dedicated configuration management tool. A security group called "instances" is also created, with a rule attached that allows incoming TCP traffic on port 8080 from any IP address.
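Instead of pinning an AMI ID by hand, the AMI can also be looked up with a data block. Below is a minimal sketch of that approach; the name filter and the Canonical owner ID are assumptions that you should verify for your Ubuntu release and region.

# Optional: look up an Ubuntu AMI instead of hard-coding its ID (sketch only)
data "aws_ami" "ubuntu" {
    most_recent = true
    owners      = ["099720109477"] # Canonical's AWS account ID

    filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
    }
}

# The instances could then reference it with `ami = data.aws_ami.ubuntu.id`.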

web-app/main.tf
# Using the default VPC and its subnets
# `data` blocks reference resources that already exist outside this configuration
data "aws_vpc" "default_vpc" {
    default = true
}
 
data "aws_subnet_ids" "default_subnet" {
    vpc_id = data.aws_vpc.default_vpc.id # other blocks can be referenced like this
}
 
# ALB
resource "aws_lb" "load_balancer" {
    name               = "web-app-lb"
    load_balancer_type = "application" # ALB
    subnets            = data.aws_subnet_ids.default_subnet.ids
    security_groups    = [aws_security_group.alb.id]
}
 
# ALB Security Group
resource "aws_security_group" "alb" {
  name = "alb-security-group"
}
 
# ALB Security Group Rules (Ingress and Egress)
resource "aws_security_group_rule" "allow_alb_http_inbound" {
  type              = "ingress"
  security_group_id = aws_security_group.alb.id
 
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
 
resource "aws_security_group_rule" "allow_alb_all_outbound" {
  type              = "egress"
  security_group_id = aws_security_group.alb.id
 
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}

Following the EC2 instance definitions, we can configure the load balancer to route traffic to them. We first specify the use of the default VPC and its subnets with data blocks, and then define the ALB in those subnets along with a security group for it. The security group allows incoming traffic on port 80 from any IP address and all outgoing traffic on any port. Next, we need to define the load balancer listener that listens for incoming traffic on port 80 and forwards HTTP traffic to the target group containing the instances.

web-app/main.tf
# ALB Listener
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.load_balancer.arn
 
  port = 80
 
  protocol = "HTTP"
 
  # Return a 404 page by default
  default_action {
    type = "fixed-response"
 
    fixed_response {
      content_type = "text/plain"
      message_body = "404: page not found"
      status_code  = 404
    }
  }
}
 
# ALB Listener Rule
resource "aws_lb_listener_rule" "instances" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100
 
  condition {
    path_pattern {
      values = ["*"]
    }
  }
 
  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.instances.arn
  }
}
 
# ALB Target Group
resource "aws_lb_target_group" "instances" {
  name     = "example-target-group"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default_vpc.id
 
  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}
 
# ALB Target Attachment
resource "aws_lb_target_group_attachment" "instance_1" {
  target_group_arn = aws_lb_target_group.instances.arn
  target_id        = aws_instance.instance_1.id
  port             = 8080
}
 
resource "aws_lb_target_group_attachment" "instance_2" {
  target_group_arn = aws_lb_target_group.instances.arn
  target_id        = aws_instance.instance_2.id
  port             = 8080
}

The listener and target group configuration above doesn't include TLS termination or advanced routing, which you can add by setting the corresponding arguments (see the AWS provider documentation cited at the bottom of this article). Alternatively, instead of an ALB, you can run your own reverse proxy such as Nginx on EC2 instances, or use other services like Kubernetes with proper networking to handle load balancing.
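To give a rough idea of what TLS termination could look like, here is a minimal sketch of an HTTPS listener. The certificate ARN is a placeholder; in practice you would need an ACM certificate, a matching DNS record, and an additional security group rule opening port 443 on the ALB.

# Optional: HTTPS listener for TLS termination (sketch only)
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.load_balancer.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "arn:aws:acm:ap-northeast-1:123456789012:certificate/placeholder" # placeholder ARN

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.instances.arn
  }
}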

web-app/main.tf
resource "aws_db_instance" "db_instance" {
  allocated_storage          = 20
  auto_minor_version_upgrade = true # in production, pin a specific minor version instead
  storage_type               = "standard"
  engine                     = "postgres"
  engine_version             = "12"
  instance_class             = "db.t4g.micro"
  name                       = "mydb"
  username                   = "foo"
  password                   = "foobarbaz" # hard-coded password should be avoided
  skip_final_snapshot        = true
}

Finally, we can define an RDS instance running PostgreSQL using the resource configuration shown above. In a production setting, you should pin a specific minor version and use a secrets management mechanism (which we will cover in a future article) for the password (and even the database name and username), along with an application running on EC2 that accesses the database. Once all resources are defined, you can run terraform apply to provision them in a single run, and after a short while you can confirm successful resource creation in the AWS console.
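Until we cover a proper secrets management setup, one simple way to avoid the hard-coded password is to pass it in as a sensitive input variable. The sketch below is illustrative, and the variable name is arbitrary.

# Optional: pass the database password as a sensitive input variable (sketch only)
variable "db_password" {
  description = "Password for the RDS instance"
  type        = string
  sensitive   = true # keeps the value out of plan/apply output
}

# In the aws_db_instance block, replace the hard-coded password with:
#   password = var.db_password
# and supply the value at runtime, e.g. via the TF_VAR_db_password environment variable.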

Conclusion

In this article, we covered how to set up a remote backend with an S3 bucket and a DynamoDB table and how to provision basic infrastructure for a simple, small web application. For more details on the parameters of these resources and the other resources offered by AWS, I recommend checking the AWS provider documentation cited below and the official AWS documentation for the underlying services. You can also consult the documentation for other providers on the Terraform Registry to utilize resources from different cloud providers.

Resources