
Ruby on Rails deployments to Elastic Beanstalk 2021

Dusty Candland | | rails, docker, ruby, elasticbeanstalk, aws, make, docker-compose

I've run a number of projects on Elastic Beanstalk, and generally the whole experience is terrible. It's still better than running servers yourself, but it's so far from Heroku.

All my projects end up with a bunch of .ebextensions files that try to configure the EB server for the application. They work sometimes, and almost always break with platform upgrades, even minor ones.

The newer platform versions that use Amazon Linux 2 are a huge step in the right direction. They are more consistent and easier to customize. Still not as simple as Heroku.

After trying the newest Ruby platform and finding it still terrible, I decided to dig into the Docker version. The new Docker version works with Docker Compose, including the docker-compose.yml configuration file.

Using this with Buildpacks.io creates a pretty good experience. There are still some issues to work around...
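If you're starting fresh, creating a Docker environment with the EB CLI looks something like this (a sketch; the application and environment names are placeholders):

# from the project root
eb init my-app --platform docker --region us-west-2
eb create my-app-docker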

Container Registry

We need a place to host our images. Docker Hub could work, but since we're on AWS, Elastic Container Registry (ECR) is it.

You need to log in to push, and you need to fully qualify the image name in docker-compose.yml.

# Makefile
docker-login: ## Login to AWS docker repo
	aws ecr get-login-password --profile my-app --region us-west-2 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.us-west-2.amazonaws.com
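If the repository doesn't exist yet, it only needs to be created once, assuming the same profile and region as above:

aws ecr create-repository --repository-name my-app --profile my-app --region us-west-2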

And in the docker-compose.yml file.

version: "3.8"
services:
web:
image: "000000000000.dkr.ecr.us-west-2.amazonaws.com/my-app:production"
...

Customizing the deployment

Customizing the deployment is much easier with platform hooks. They live with your application code in /var/app/current and are way easier to understand than .ebextensions. Extending Elastic Beanstalk Linux platforms is a good place to start.

This customization is only available on Amazon Linux 2 platforms.

For this project, I needed a way to get the hostname of the machine for NSQd. The customizations go in a .platform directory that needs to get deployed with the docker-compose.yml file.

One thing that tripped me up: there are two deployment modes, one for application deploys and one for configuration deploys. For example, when you add or update environment variables, that's a configuration deploy.

Under .platform, I need the same script in both the hooks directory and the confighooks directory. We want to hook into the prebuild stage, so the script goes in a prebuild subdirectory under each.

The script logs to eb-engine.log, which is standardized and more consistent on Amazon Linux 2, and then writes a host.env file. We tell Docker Compose to load that file for the nsqd service.

#!/bin/bash
# .platform/hooks/prebuild/01_env_setup.sh

echo "[CONFIG HOOK] setting HOST_HOSTNAME to $(hostname) in $(pwd)/host.env" >> /var/log/eb-engine.log
echo "HOST_HOSTNAME=$(hostname)" > "host.env"

.platform
├── confighooks
│   └── prebuild
│       └── 01_env_setup.sh
└── hooks
    └── prebuild
        └── 01_env_setup.sh
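Two gotchas worth noting: hook files must be executable, and the two copies need to stay in sync. A couple of shell commands handle both (one possible approach; adjust paths to taste):

chmod +x .platform/hooks/prebuild/01_env_setup.sh
cp .platform/hooks/prebuild/01_env_setup.sh .platform/confighooks/prebuild/01_env_setup.sh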

This structure is where all the server configuration should be. On the plus side, I only needed this for NSQd; otherwise no customization would have been needed at all.

Environment variables and updates

Most other environment variables can be set using the eb setenv command. We still need to tell Docker Compose about the ones we care about; you'll see that in the docker-compose.yml file.
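For example (the values here are placeholders):

eb setenv RAILS_ENV=production RAILS_MASTER_KEY=xxxx NSQ_LOOKUPD_TCP_ADDRESS=lookupd.example.com:4160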

I have staging and production environments set up, so to make sure production doesn't grab an image that isn't ready, I tag the images accordingly. I use a Makefile to help with this and a docker-compose.yml template.

# docker-compose.template.yml
version: "3.8"
services:
  web:
    image: "000000000000.dkr.ecr.us-west-2.amazonaws.com/my-app:^TAG"
    ports:
      - "80:5000"
    volumes:
      - "${EB_LOG_BASE_DIR}/web:/workspace/log"
    depends_on:
      - nsqd
    environment:
      PORT: 5000
      NODE_ENV: "production"
      RACK_ENV: "${RAILS_ENV}"
      RAILS_ENV: "${RAILS_ENV}"
      RAILS_SERVE_STATIC_FILES: "true"
      RAILS_MASTER_KEY: "${RAILS_MASTER_KEY}"
      NSQ_LOOKUPD_TCP_ADDRESS: "${NSQ_LOOKUPD_TCP_ADDRESS}"
      NSQ_LOOKUPD_HTTP_ADDRESS: "${NSQ_LOOKUPD_HTTP_ADDRESS}"
      NSQD_TCP_ADDRESS: "nsqd:4150"
  sidekiq:
    image: "000000000000.dkr.ecr.us-west-2.amazonaws.com/my-app:^TAG"
    entrypoint: sidekiq
    volumes:
      - "${EB_LOG_BASE_DIR}/sidekiq:/workspace/log"
    depends_on:
      - web
    environment:
      RACK_ENV: "${RAILS_ENV}"
      RAILS_ENV: "${RAILS_ENV}"
      NODE_ENV: "production"
      RAILS_MASTER_KEY: "${RAILS_MASTER_KEY}"
      NSQ_LOOKUPD_TCP_ADDRESS: "${NSQ_LOOKUPD_TCP_ADDRESS}"
      NSQ_LOOKUPD_HTTP_ADDRESS: "${NSQ_LOOKUPD_HTTP_ADDRESS}"
      NSQD_TCP_ADDRESS: "nsqd:4150"
  nsqd:
    image: "nsqio/nsq"
    entrypoint: "/bin/sh -c \"/nsqd -lookupd-tcp-address ${NSQ_LOOKUPD_TCP_ADDRESS} -broadcast-address $$HOST_HOSTNAME\""
    env_file: "host.env"
    ports:
      - 4150:4150
      - 4151:4151

Important parts:

  • EB_LOG_BASE_DIR is used to output logs on the host machine. They end up in /var/log/eb-docker/containers/.
  • The app images need fully qualified URLs if they're not on Docker Hub.
  • ^TAG is what I replace in the Makefile for the different environments.
  • services:nsqd:env_file is where we tell Docker Compose to load the host.env file we wrote with the prebuild hooks above.
  • $$HOST_HOSTNAME is escaped so that environment variable is resolved when the entrypoint command runs; the others get substituted when docker-compose up is run.

Putting it all together

Everything deployment related is in a dist directory, with three main parts: the buildpack, the EB configuration, and the Makefile.

To see how I'm creating the docker image, check out Build a Docker image like Heroku.

The EB configuration is discussed above.

That leaves the Makefile.

.PHONY: help

AWS_REGION = us-west-2

APP_NAME := myapp
BUILD := $(shell git rev-parse --short HEAD)

IS_PROD := $(filter prod, $(MAKECMDGOALS))
ENV := $(if $(IS_PROD),myapp-production,myapp-docker)
image_tag := $(if $(IS_PROD),production,latest)

PRODUCTION_KEY = `cat ../config/credentials/production.key`

help:
	@echo "$(APP_NAME):$(BUILD)"
	@echo " Deploying to $(ENV)"
	@perl -nle'print $& if m{^[a-zA-Z_-]+:.*?## .*$$}' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

prod: ## Set the deploy target to prod
	@echo "Setting environment to production"

clean:
	rm -f docker-compose.yml

build-image: ## Build a new docker image
	pack build myapp --env NODE_ENV=production --env RAILS_ENV=production --env RAILS_MASTER_KEY=$(PRODUCTION_KEY) --path ../ --buildpack ./ruby-buildpack --descriptor project.toml --builder paketobuildpacks/builder:full

tag-image: ## Tag the image for AWS
	docker tag myapp:latest 000000000000.dkr.ecr.us-west-2.amazonaws.com/myapp:$(image_tag)

docker-login: ## Login to AWS docker repo
	aws ecr get-login-password --profile myapp --region us-west-2 | docker login --username AWS --password-stdin 000000000000.dkr.ecr.us-west-2.amazonaws.com

deploy-image: tag-image docker-login ## Send image to AWS
	docker push 000000000000.dkr.ecr.us-west-2.amazonaws.com/myapp:$(image_tag)

deploy: clean docker-compose.yml ## update the EB ENV
	eb deploy $(ENV)

migrate: ## Run rails db:migrate
	eb ssh $(ENV) --no-verify-ssl -n 1 -c "cd /var/app/current && sudo docker-compose exec -T web launcher 'rails db:migrate'"

docker-compose.yml: ## Make a compose file for the ENV
	sed -e "s/\^TAG/$(image_tag)/g" docker-compose.template.yml > docker-compose.yml

Nothing too crazy here, but a deploy does require a few make targets chained together.
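The prod pseudo-target flips both the environment name and the image tag via MAKECMDGOALS, so the two deploys look like this:

make build-image deploy-image deploy        # staging, tagged latest
make prod build-image deploy-image deploy   # production, tagged production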

The deploy needs the generated docker-compose.yml file and the .platform directory. Everything else is excluded with the .ebignore file.
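A minimal .ebignore sketch (not the exact file from this project), using its .gitignore syntax to allowlist just those two paths:

# .ebignore
*
!docker-compose.yml
!.platform/
!.platform/**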

Rails health check setup

So EB knows everything is OK with the application, we need a health check endpoint. This is actually for the load balancer that EB sets up. You'll need to change the health check path from / to /healthcheck.

Don't try to change the LB setting from the EB console interface; it doesn't actually save the change. You need to make it in the EC2 console instead.
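It can also be done from the CLI, assuming EB created an application load balancer (the target group ARN is yours to fill in):

aws elbv2 modify-target-group --target-group-arn <target-group-arn> --health-check-path /healthcheck --profile myapp --region us-west-2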

Add the VPC IPs to the Rails hosts configuration in the config/environments/staging.rb and config/environments/production.rb files.

Also in those files, exempt the healthcheck endpoint from forced SSL. We terminate SSL at the load balancer, which works for normal requests because Rails knows they were proxied. The health check requests hit the instance directly by IP without the proxy headers, so they need the exemption.

# config/environments/production.rb
...
config.force_ssl = true

config.ssl_options = {
  redirect: {
    exclude: ->(request) { /healthcheck/.match?(request.path) },
  },
}
...
config.hosts << IPAddr.new("172.31.0.0/16")
...

Next add a route and controller for the check.

# config/routes.rb
get "/healthcheck/", to: "health#check"

This could be a simple 'OK' message. I also wanted to make sure Sidekiq and NSQd were OK.

# app/controllers/health_controller.rb
class HealthController < ApplicationController
  skip_authorization_check

  def check
    status = 200

    sidekiq_ok, sidekiq_data = sidekiq
    nsqd_ok, nsqd_data = nsqd

    status = 503 unless sidekiq_ok && nsqd_ok

    render status: status, json: {
      status: status,
    }
  end

  private

  # Healthy if at least one Sidekiq process is reporting in.
  def sidekiq
    processes = Sidekiq::ProcessSet.new
    [processes.count > 0, processes]
  rescue => e
    [false, {
      message: e.message,
      backtrace: e.backtrace,
    },]
  end

  # Healthy if nsqd's HTTP stats endpoint reports OK.
  def nsqd
    nsqd_tcp = ENV.fetch("NSQD_TCP_ADDRESS", "127.0.0.1:4150")
    nsqd_uri = URI("tcp://#{nsqd_tcp}")
    nsqd_host = nsqd_uri.host
    resp = HTTP.get("http://#{nsqd_host}:4151/stats?format=json")
    json = resp.parse
    [resp.status == 200 && json["health"] == "OK", json]
  rescue => e
    [false, {
      message: e.message,
      backtrace: e.backtrace,
    },]
  end
end

With this in place, the EB environments will report correct statuses.
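You can watch the status settle with the EB CLI (assuming the environment names from the Makefile):

eb health myapp-docker --refresh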

How's it going?

After a month or so, I can say it's been way more stable than any previous Elastic Beanstalk setup I've used.

Building the docker image can be a bit slow, but it doesn't seem much longer than copying all the Rails files to the server.

I'd like to set up Nginx as a local proxy for the app. This would allow caching and other benefits of Nginx. It would also allow standard request logs to work for Elastic Beanstalk, since EB expects nginx logs to be on the machine.
