Building AMIs with Packer

Wouter van der Meulen
Apr 9 2021
Posted in Engineering & Technology

Provisioning infrastructure as code

Creating and provisioning instances on [insert your cloud provider here] can take a lot of time, and cutting down the time between pressing "Launch" in the console and having something actually running can make all the difference.

Docker is all the rage nowadays for creating easy-to-deploy containers, but Docker isn't always the right choice for the job. While creating a Docker image seems trivial, hosting your containers usually requires some knowledge of cloud orchestration tools such as Kubernetes.

For teams that don't have the capacity to take on managing a Kubernetes cluster, but do want to take that step into immutable infrastructure, HashiCorp has us covered with Packer.

In this blog post, we will take a quick look at creating a simple AWS AMI with MongoDB pre-installed.

Preface

This post aims to give you a brief introduction to Packer and a practical use case. It is not a fully fledged tutorial.

Since nothing beats official documentation, we will let HashiCorp's documentation speak for itself, and we would advise anyone to at least go through their Getting Started pages.

The following will cover building a basic image, as well as showing the power of running scripts during AMI creation.

What is Packer?

Glad you asked!

Packer is an automation tool that lets you create system images using a clean configuration language. When run, it starts up an instance with the chosen cloud provider, runs whatever build scripts are defined, and stores a snapshot of the result as an AMI (in Amazon's case) to be used as a template later on.

The syntax looks something like this:

locals { timestamp = regex_replace(timestamp(), "[- TZ:]", "") }

# We're creating an image backed by amazon ebs
source "amazon-ebs" "mongodb" {

  ami_name      = "mongodb-packer-image-${local.timestamp}" # This will be the AMI name in AWS
  instance_type = "t4g.micro"
  region        = "eu-central-1"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }
  ssh_username = "ubuntu"
}

If this looks familiar, it's because it uses the same syntax as HashiCorp's other product, Terraform, which would allow you to boot up your newly created AMI from the same codebase. But that's a topic for a whole other blog post.

As the code suggests, this image is built on Ubuntu 20.04 for ARM using a t4g.micro instance. Note that instance_type only determines the instance used during the build; it does not limit the resulting AMI to t4g.micro. The AMI will, however, be limited to ARM servers, and to EBS-backed instances.

Ubuntu-based AMIs have different IDs for each architecture and region they are built for. The source_ami_filter will automatically select the right AMI to use. A specific ID can also be provided, but the filter is recommended if you intend to switch between regions.
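
If you do want to pin an exact base image, the filter block can be swapped for a source_ami attribute. A minimal sketch (the ID below is just a placeholder, not a real image):

source "amazon-ebs" "mongodb" {
  ami_name      = "mongodb-packer-image-${local.timestamp}"
  instance_type = "t4g.micro"
  region        = "eu-central-1"

  # Instead of source_ami_filter, pin an exact base image.
  # AMI IDs differ per region and architecture; this one is a placeholder.
  source_ami   = "ami-0123456789abcdef0"
  ssh_username = "ubuntu"
}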

The Build Step

After setting up the data for the new AMI, it is time for the build step:

# a build block invokes sources and runs provisioning steps on them.
build {
  sources = ["source.amazon-ebs.mongodb"]

  provisioner "shell" {
    script = "./scripts/apt_upgrade.sh"
  }

  provisioner "shell" {
    script = "./scripts/install_mongo.sh"
  }
}

This step is quite simple. For the source provided, an instance will be automatically created in AWS, and Packer will then run each provisioner on it. Provisioners can be anything, from a shell script (inline, or from a file) to Ansible playbooks.
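
An inline variant, for instance, looks like this (the commands are just an illustration):

  provisioner "shell" {
    # Commands can also be given inline instead of pointing at a script file
    inline = [
      "echo 'Hello from the build instance'",
      "sudo apt-get update -y"
    ]
  }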

Instead of a script, Packer also allows you to upload files directly to the build instance, so they end up baked into the AMI. This is convenient for scripts you need on an actual instance based on your AMI, or you could simply deploy your whole application this way.
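
That is done with the file provisioner; a small sketch, assuming a local config file you want to ship (the paths are placeholders):

  provisioner "file" {
    # Upload a local file to the build instance; it ends up in the AMI.
    # The SSH user typically can't write to system paths, so upload to /tmp
    # and move it into place with a follow-up shell provisioner if needed.
    source      = "./config/mongod.conf"
    destination = "/tmp/mongod.conf"
  }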

Provisioning Scripts

The scripts we use for this example are fairly simple. First of all, upgrade all the current packages before we do anything else:

#! /bin/bash

sudo apt update -y
sudo apt upgrade -y

And then, we run the following (based on the official MongoDB docs):

#! /bin/bash

# Import the keys
wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -

# Add the repository
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

# Update the apt cache
sudo apt-get update -y

# Install mongodb-org
sudo apt-get install -y mongodb-org

# -- Optional --
# Prepare data directory
sudo mkdir -p /data
sudo chown -R mongodb:mongodb /data
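
If you go the custom data directory route, mongod also has to be pointed at it, since the package defaults to /var/lib/mongodb. A minimal sketch, assuming the stock /etc/mongod.conf that ships with the package:

# Point mongod at the new data directory and start it on boot
sudo sed -i 's|dbPath: /var/lib/mongodb|dbPath: /data|' /etc/mongod.conf
sudo systemctl enable mongod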

Running Packer

packer build mongodb.pkr.hcl
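
As an aside, the template can also be formatted and sanity-checked before kicking off a build:

# Optional: format and validate the template first
packer fmt mongodb.pkr.hcl
packer validate mongodb.pkr.hcl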

When you run packer build, you'll see something like this:

amazon-ebs.mongodb: output will be in this color.

==> amazon-ebs.mongodb: Prevalidating any provided VPC information
==> amazon-ebs.mongodb: Prevalidating AMI Name: mongodb-packer-image-20210408080718
    amazon-ebs.mongodb: Found Image ID: ami-xxxxxxxxxxx
==> amazon-ebs.mongodb: Creating temporary keypair: packer_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
==> amazon-ebs.mongodb: Creating temporary security group for this instance: packer_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
==> amazon-ebs.mongodb: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs.mongodb: Launching a source AWS instance..
==> amazon-ebs.mongodb: Adding tags to source instance
    amazon-ebs.mongodb: Adding tag: "Name": "Packer Builder"
    amazon-ebs.mongodb: Instance ID: i-xxxxxxxxxxxxx
==> amazon-ebs.mongodb: Waiting for instance (i-xxxxxxxxxxxxx) to become ready...
==> amazon-ebs.mongodb: Using ssh communicator to connect: xxx.xxx.xxx.xxx
==> amazon-ebs.mongodb: Waiting for SSH to become available...
==> amazon-ebs.mongodb: Connected to SSH!
==> amazon-ebs.mongodb: Provisioning with shell script: ./scripts/apt_upgrade.sh

And when it finishes successfully you should see:

==> amazon-ebs.mongodb: Waiting for the instance to stop...
==> amazon-ebs.mongodb: Creating AMI mongodb-packer-image-20210408080718 from instance i-xxxxxxxxxxxxx
    amazon-ebs.mongodb: AMI: ami-xxxxxxxxxxx
==> amazon-ebs.mongodb: Waiting for AMI to become ready...
==> amazon-ebs.mongodb: Terminating the source AWS instance...
==> amazon-ebs.mongodb: Cleaning up any extra volumes...
==> amazon-ebs.mongodb: No volumes to clean up, skipping
==> amazon-ebs.mongodb: Deleting temporary security group...
==> amazon-ebs.mongodb: Deleting temporary keypair...
Build 'amazon-ebs.mongodb' finished after 5 minutes 19 seconds.

==> Wait completed after 5 minutes 19 seconds

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs.mongodb: AMIs were created:
eu-central-1: ami-xxxxxxxxxxxxxx

And there it is: Packer ends with the new AMI ID, which can now be used in AWS to create a new instance.
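
For example, with the AWS CLI (substituting the real AMI ID and whatever networking and key settings your setup needs):

# Launch an instance from the freshly built AMI
aws ec2 run-instances \
  --region eu-central-1 \
  --image-id ami-xxxxxxxxxxxxxx \
  --instance-type t4g.micro \
  --count 1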

Conclusion

The image we created is just a simple example to get a MongoDB server ready to go. MongoDB still needs to be configured for use in production. While this is a simple implementation, you can make this as complex as you need it to be. You can find out more about Packer on their website.

As always, if you have any questions, suggestions, or corrections to this post, don't hesitate to drop us a message.
