Automating Packer with CodePipeline

Wouter van der Meulen
Oct 15 2021
Posted in Engineering & Technology

An introduction to automated AMI building

In this quick post I want to go over creating AMIs using Packer and CodePipeline. In this example, Packer will upload a webapp to the AMI and place it in a specific location. To keep the focus on the automation, I will not go into any of the Linux setup needed to actually run the webapp.

This guide assumes you know the basics of Packer and AWS. If you're not familiar with Packer, you can check out our previous blog post on building AMIs with Packer.

Setting up Packer

Configuration

The Packer configuration we will build with CodePipeline is as follows. Save it as main.pkr.hcl in the root of your application directory:

locals { timestamp = regex_replace(timestamp(), "[- TZ:]", "") }

source "amazon-ebs" "webapp" {

  ami_name      = "webapp-arm64-${ local.timestamp }"
  instance_type = "t4g.micro"
  region        = "eu-central-1"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }

  ssh_username = "ubuntu"

  tags = {
      Name = "WebApp"
  }
}

# a build block invokes sources and runs provisioning steps on them.
build {
  sources             = ["source.amazon-ebs.webapp"]

  # Create a directory to upload the codebase to
  # Packer will fail if the directory doesn't exist
  provisioner "shell" {
    inline = ["mkdir -p /tmp/codebase"]
  }

  # Upload the entire codebase to the AMI
  provisioner "file" {
    source        = "./"
    destination   = "/tmp/codebase/"
  }

  # Run all the required scripts
  # Place your provisioning scripts here
  provisioner "shell" {
    scripts            = [
      "./packer/scripts/unpack-codebase.sh"
    ]
  }
}
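
If you want to sanity-check the template before wiring it into the pipeline, you can run the same validate and build steps locally. A minimal sketch, assuming Packer is installed on your machine and your shell has AWS credentials with the required EC2 permissions:

# Check the template for syntax and configuration errors
packer validate main.pkr.hcl

# Build the AMI in eu-central-1 using your local credentials
packer build main.pkr.hcl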

Scripts

The unpack-codebase.sh script is placed in ./packer/scripts and looks like this:

#! /bin/bash

# Create a webapp user
sudo adduser --disabled-password --gecos "" webapp

# Move the codebase from the tmp directory to the app directory
sudo mkdir -p /var/app
sudo mv /tmp/codebase/ /var/app/codebase/
sudo chown -R webapp:webapp /var/app/
sudo chmod 755 /var/app/codebase

All this does is create a new user and move the codebase from the tmp directory into its proper location. Of course, you don't need to use /var/app; adjust the location to fit your needs.

Buildspec

Create a buildspec.yml and place it in the root of your project. You can use the following buildspec example.

Essentially, we first install Packer and validate the Packer configuration. If that's successful, we retrieve the necessary credentials for Packer and then run the build. Since Packer handles the creation of the AWS AMI itself, we do not need to save any artifacts.

Security notice: this configuration downloads binaries from the internet; always check the sources yourself before implementing them.

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -o packer.zip https://releases.hashicorp.com/packer/1.7.6/packer_1.7.6_linux_amd64.zip && unzip packer.zip -d ./bin/
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating packer script"
      - ./bin/packer validate main.pkr.hcl
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building Packer image"
      - ./bin/packer build main.pkr.hcl
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

CodePipeline

Now for the actual CodePipeline!

First off, navigate to the CodePipeline interface and create a new Pipeline. You likely won't have a service role yet, so you can let AWS take care of that.

If you're familiar with Packer and IAM permissions, you'll likely have a custom policy already. If you're not: Packer needs access to EC2 to create and terminate instances and to register AMIs. So you'll need to go to the IAM Management Console and attach the PowerUserAccess policy to the new role. This policy provides Packer with all the permissions it needs.
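
If you prefer the command line, attaching the managed policy to the service role that runs Packer looks roughly like this (the role name below is a placeholder for whatever role your build ends up using):

# Attach the PowerUserAccess managed policy to the build's service role
aws iam attach-role-policy \
  --role-name codebuild-webapp-service-role \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess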

Connect to your source provider and select the repo/branch you want to build.

Next up, you'll need to set up the Build phase: set the provider to CodeBuild, select your region, and choose or create a project. If you need to create a new project, you can use the default Ubuntu image. You don't need to add anything else, because we'll be using the default buildspec location. The rest will be handled by CodePipeline.
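
For reference, creating a roughly equivalent project from the CLI could look like the sketch below; the project name, role ARN and account ID are placeholders, and aws/codebuild/standard:5.0 is the standard Ubuntu image at the time of writing:

aws codebuild create-project \
  --name webapp-packer-build \
  --source type=CODEPIPELINE \
  --artifacts type=CODEPIPELINE \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL \
  --service-role arn:aws:iam::123456789012:role/codebuild-webapp-service-role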

You can skip the deploy phase, as we won't be deploying this AMI anywhere in this guide. Review your changes, and finish the setup. CodePipeline should be starting its initial build.

If you've done everything correctly, the pipeline will run successfully and a new AMI should be ready in EC2. Make sure to enable CloudWatch Logs in the CodeBuild settings to see Packer's output for debugging.
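
Once the pipeline has run, you can also confirm the result from the CLI, assuming the same region and naming scheme as the template above:

# List the webapp AMIs owned by this account
aws ec2 describe-images \
  --owners self \
  --region eu-central-1 \
  --filters "Name=name,Values=webapp-arm64-*" \
  --query "Images[].{Id:ImageId,Name:Name,Created:CreationDate}" \
  --output table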

That's it! As always, we hope you liked this article and if you have anything to add, don't be shy and drop a message in our Support Channel.
