A comprehensive guide to ECS deployments - Part 5: Serving Django Static Files from S3, Adding Security & Autoscaling
by Asher
October 28, 2020
Django ECS at the Finish Line

We're almost at the finish line. This is the fifth and final article in the series to deploy a full stack on AWS ECS using Fargate.

  • Part 1:
    • A complete VPC with security groups, subnets, NAT gateways and more
  • Part 2:
    • Deploying an ECS cluster and IAM roles for Fargate services
    • Setting up a CloudFront distribution to serve static files
  • Part 3:
    • Creating a simple Django app with a celery backend to process asynchronous requests
  • Part 4:
    • Creating an RDS database & Redis instance
    • Registering the Django app in ECR and deploying it to ECS
  • Part 5 (this article):
    • Setting up Auto Scaling, HTTPS routing & Serving Static Files from CloudFront


Overview

The app is running but there are just a few places where we need to touch up the infrastructure to help it scale better and be more secure. As you're going through this tutorial, all of this code can be found in the Tree Schema ECS Example GitHub repo. And with that, let's get started!

Set up HTTPS

We want to use HTTPS for all incoming traffic to the load balancer to leverage the security that HTTPS provides over HTTP. We will eventually force all HTTP traffic to be redirected to HTTPS, and we will use the load balancer to apply the certificate for our domain. The logical flow will look like this:

ALB Logical Flow
Application Load Balancer Logical Flow
Create the Certificate

The first step to creating the certificate is to verify that you own the domain. If you do not already have a domain, you can purchase one directly from AWS or you can use another provider and delegate the routing of traffic to AWS.

To add a certificate to AWS Certificate Manager (ACM) you can either request a new cert, which requires verifying that you own the domain, or you can import an existing certificate. I'll assume that you own the domain and you have the ability to edit the DNS configuration for your domain.

Start by requesting a public cert (this should be the only option available).

AWS Public Certificate Selection

The domain name that you enter next should correspond to your own personal domain. Since we own the domain treeschema.com, I will create the certificate for all of Tree Schema. Eventually this app will run as a subdomain at ecs-example.treeschema.com, but the certificate will be created to cover treeschema.com, *.treeschema.com (any subdomain) and www.treeschema.com.

AWS Certificate Domain

Select DNS validation, add any tags you would like and then review and submit. If you are using Route53, AWS will prompt you to automatically verify the DNS configuration and the whole setup will take about two minutes to complete. If you are using a third-party DNS provider the verification may take up to twenty minutes.

Once completed you can grab the ARN from your cert.

AWS Cert ARN
Update the CFT for HTTPS

Now that we have an ARN for the certificate we need to plug it into the mappings section for the CFT:

AWS Cert ARN

This certificate will be used for all HTTPS traffic. In order to actually use the certificate we will need to add another listener to the load balancer. In the previous article an HTTP listener was created that forwarded traffic to our Django app; now we will create an HTTPS listener:

HTTPS Listener

And now that we have an HTTPS listener we can update the HTTP listener to route all traffic to HTTPS instead of sending HTTP traffic directly to our Django app. The updated HTTP listener will answer every request with an HTTP 301 redirect.

Updated HTTP Listener

And now we have all of our traffic going to HTTPS!
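The listeners above are defined in the CloudFormation template, but if it helps to see the moving pieces spelled out, here is a rough boto3 sketch of the equivalent pair: an HTTPS listener that carries the ACM certificate and an HTTP listener that only redirects. Every ARN below is a placeholder for a value from your own stack.

  import boto3  # sketch only: every ARN below is a placeholder for a value from your own stack

  elbv2 = boto3.client("elbv2")

  # HTTPS listener: terminates TLS with the ACM certificate and forwards to the Django target group
  elbv2.create_listener(
      LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/ecs-example/abc",
      Protocol="HTTPS",
      Port=443,
      Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
      DefaultActions=[{
          "Type": "forward",
          "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ecs-example/def",
      }],
  )

  # HTTP listener: no longer forwards to Django, it only answers with a 301 redirect to HTTPS
  elbv2.create_listener(
      LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/ecs-example/abc",
      Protocol="HTTP",
      Port=80,
      DefaultActions=[{
          "Type": "redirect",
          "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
      }],
  )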

Forward Traffic from Your Domain to the App

The domain is in place, and the load balancer has a certificate and forces all traffic through HTTPS. Now we need to tell AWS that all traffic for our domain (e.g. treeschema.com, ecs-example.treeschema.com, etc.) should be sent to our load balancer. This can be done in Route53.

Navigate to your hosted zone in Route53. If you don't have one, the setup only takes a few moments if you already have your domain and the ability to update the domain configs. Select the option to create a new record and choose simple record. I will be creating this as ecs-example.treeschema.com and therefore I have a sub-domain to enter into the record name. If you are not using a subdomain you can leave this field blank. Select the option to route traffic to an AWS ALB, choose your region and then select your load balancer. Select the record type A unless you need IPv6, for which you should choose AAAA.

Domain Routing Configs
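If you would rather script this step than click through the console, the same alias record can be created with boto3. This is only a sketch - the hosted zone ID, load balancer name and domain are placeholders:

  import boto3  # sketch only: hosted zone ID, domain and load balancer name are placeholders

  elbv2 = boto3.client("elbv2")
  route53 = boto3.client("route53")

  # Alias records need the ALB's DNS name and its canonical hosted zone ID
  alb = elbv2.describe_load_balancers(Names=["ecs-example-alb"])["LoadBalancers"][0]

  route53.change_resource_record_sets(
      HostedZoneId="Z123EXAMPLE",  # the hosted zone for your domain
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "ecs-example.treeschema.com",
                  "Type": "A",  # use AAAA as well if you need IPv6
                  "AliasTarget": {
                      "DNSName": alb["DNSName"],
                      "HostedZoneId": alb["CanonicalHostedZoneId"],
                      "EvaluateTargetHealth": False,
                  },
              },
          }]
      },
  )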

That's all there is to it. This information may take 15-20 minutes to fully propagate across the internet but that will give us time to set up the rest of our app configuration before we redeploy and try it out.

Serve Static Content from S3 via CloudFront

One of the best aspects of Django is its rich user community. There are packages for everything, and two of them can be used together to serve static files from CloudFront - which will retrieve and cache the files from S3 - and to quickly collect and persist the static files in S3. The two packages are:

  • django-storages
  • Collectfast

The first step is to add these two requirements to the requirements file. In order to use these features, the production configuration will need to be updated so that Django recognizes them as installed apps:

  INSTALLED_APPS = ["collectfast", "storages"] + INSTALLED_APPS

There are a few variables that need to be set in the production configuration as well to enable both saving static files to S3 and serving the content from CloudFront.

  # Serve Content from CloudFront
  INSTALLED_APPS = ["collectfast", "storages"] + INSTALLED_APPS

  # Variables used by django-storages & collectfast
  STATICFILES_STORAGE = "config.settings.production.StaticRootS3Boto3Storage"
  COLLECTFAST_STRATEGY = "collectfast.strategies.boto3.Boto3Strategy"
  AWS_S3_CUSTOM_DOMAIN = env("MY_STATIC_CDN")
  AWS_STORAGE_BUCKET_NAME = env("AWS_STORAGE_BUCKET_NAME")

  # Required since storages doesn't pick up on role access.
  # These must be in your AWS Secrets Manager!
  AWS_ACCESS_KEY_ID = DJANGO_ECS_SECRETS["DJANGO_AWS_ACCESS_KEY_ID"]
  AWS_SECRET_ACCESS_KEY = DJANGO_ECS_SECRETS["DJANGO_AWS_SECRET_ACCESS_KEY"]

  # Define a custom storage class (note: it is referenced by STATICFILES_STORAGE)
  from storages.backends.s3boto3 import S3Boto3Storage  # noqa E402

  class StaticRootS3Boto3Storage(S3Boto3Storage):
      location = "static"
      default_acl = "public-read"

These settings are populated by two new environment variables, so we now need to go to the ECS task deployment template and add them in. Going back to the second article, we created both the S3 bucket and the CloudFront distribution for our static files. Those values can be imported from the previous stack to populate our environment variables. The new environment variables for the Django container definition look like this:

New Django Environment Variables

The current way that our Django Docker image is built places all of the static files into the container. While this has the downside of bloating the image with unnecessary (and sometimes quite large) files, it has the benefit of making it really simple to collect static files: once the new image with the updated configuration is built and pushed to ECR, we can trigger the container to run the collectstatic command. If you bake this step into your build process it becomes a little easier to manage your static files, because you can simply put them into the app container and then run your static file collection. That being said, if you are building anything more complicated than this sample app it would be a worthwhile investment to remove the static content from the Docker image and have a separate process for populating your static files in S3.

The following script can be used to collect your static files and move them to S3. This script will pull in the values from the CloudFormation exports to populate your task configurations. If set up properly, you can execute it from your CI/CD pipeline or run it locally.

Django Collect Static Script
Collect Static Script
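Since the script itself is only shown as a screenshot, here is a hedged sketch of the same idea in Python with boto3: read the values exported by the earlier stacks, then run a one-off Fargate task from the Django task definition with the command overridden to collectstatic. The export names, cluster, task definition family and container name are assumptions - swap in the ones from your own templates.

  import boto3  # sketch: export names, cluster, task definition family and container name are assumptions

  cfn = boto3.client("cloudformation")
  ecs = boto3.client("ecs")

  # Values exported by the earlier CloudFormation stacks (VPC, cluster, etc.)
  exports = {e["Name"]: e["Value"] for e in cfn.list_exports()["Exports"]}

  def run_django_one_off_task(command):
      """Run a one-off Fargate task from the Django task definition with an overridden command."""
      return ecs.run_task(
          cluster=exports["ecs-cluster-name"],           # assumed export name
          launchType="FARGATE",
          taskDefinition="django-ecs-example",           # assumed task definition family
          networkConfiguration={
              "awsvpcConfiguration": {
                  "subnets": [exports["private-subnet-1"], exports["private-subnet-2"]],
                  "securityGroups": [exports["django-security-group"]],
                  "assignPublicIp": "DISABLED",
              }
          },
          overrides={"containerOverrides": [{
              "name": "django",                          # assumed container name
              "command": command,
          }]},
      )

  if __name__ == "__main__":
      run_django_one_off_task(["python", "manage.py", "collectstatic", "--noinput"])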

Manage Django Migrations

We need to run the migrations in production before the app will be able to save email addresses to the database. There are a million ways to do this; I prefer to keep some hands-on involvement for this activity because it can be a risky operation.

To execute the migrations I'll use the same script as above but the command will be updated to use the Django migrate function:

Execute Django Migrations
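If you are following the boto3 sketch from the collectstatic section, only the command override changes:

  # Reusing the hypothetical run_django_one_off_task() helper from the collectstatic sketch
  run_django_one_off_task(["python", "manage.py", "migrate", "--noinput"])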

Autoscaling the Application

One of the biggest benefits that we get by running our containerized application on ECS is that we have a lot of control over how the application will scale. We need to set up at least two resources:

  • A scaling target
  • A policy to scale up and scale down

Auto scaling policies handle both scaling up and scaling down, allowing the application not only to grow but also to shrink when there is less traffic. We will create multiple policies to demonstrate the fine-grained control that you have over how and when your resources scale. For example, you may want to scale up when memory usage is greater than 60% or scale down when CPU usage is lower than 30%.

Let's start with the scaling target, which points to the ECS service that is running our app. There is a lot of boilerplate here since I'm building string values on the fly, but essentially we just need to define the desired minimum and maximum counts, the service (i.e. our ECS app) and the IAM role that should be used.

ECS Auto Scaling Target
Auto Scaling Target
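Outside of CloudFormation, the same scalable target boils down to a single Application Auto Scaling call. As a sketch only - the cluster and service names, capacities and role ARN below are placeholders:

  import boto3  # sketch: cluster/service names, capacities and role ARN are placeholders

  autoscaling = boto3.client("application-autoscaling")

  # The scalable target points at the ECS service and bounds how far it can scale
  autoscaling.register_scalable_target(
      ServiceNamespace="ecs",
      ResourceId="service/ecs-example-cluster/django-ecs-example",  # service/<cluster>/<service>
      ScalableDimension="ecs:service:DesiredCount",
      MinCapacity=2,
      MaxCapacity=10,
      RoleARN="arn:aws:iam::123456789012:role/ecs-autoscale-role",
  )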

The auto scaling policies defined here will scale the service up when CPU utilization is greater than 75% and down when it is less than 30%. In addition, the service will scale up when memory usage is above 60% for 5 minutes and down when memory usage is below 20% for the same duration.

ECS Auto Scaling Policies
Auto Scaling Policies
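Each policy in the screenshot pairs a step scaling policy with the CloudWatch alarm that triggers it. Below is a boto3 sketch of just the scale-up-on-CPU pair; the threshold mirrors the 75% rule above, while the names, cooldown and evaluation period are assumptions. The other three policies follow the same pattern with different metrics, thresholds and adjustments.

  import boto3  # sketch of one policy/alarm pair; names, cooldown and evaluation period are assumptions

  autoscaling = boto3.client("application-autoscaling")
  cloudwatch = boto3.client("cloudwatch")

  # Step scaling policy: add one task whenever the attached alarm fires
  policy = autoscaling.put_scaling_policy(
      PolicyName="django-cpu-scale-up",
      ServiceNamespace="ecs",
      ResourceId="service/ecs-example-cluster/django-ecs-example",
      ScalableDimension="ecs:service:DesiredCount",
      PolicyType="StepScaling",
      StepScalingPolicyConfiguration={
          "AdjustmentType": "ChangeInCapacity",
          "Cooldown": 300,
          "MetricAggregationType": "Average",
          "StepAdjustments": [{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 1}],
      },
  )

  # CloudWatch alarm: average CPU above 75% invokes the scaling policy
  cloudwatch.put_metric_alarm(
      AlarmName="django-cpu-high",
      Namespace="AWS/ECS",
      MetricName="CPUUtilization",
      Dimensions=[
          {"Name": "ClusterName", "Value": "ecs-example-cluster"},
          {"Name": "ServiceName", "Value": "django-ecs-example"},
      ],
      Statistic="Average",
      Period=60,
      EvaluationPeriods=5,
      Threshold=75.0,
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=[policy["PolicyARN"]],
  )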

Sending Emails with SES

You may have noticed in the previous article that we didn't deploy the Celery app that sends the emails. This is because we need to make a few minor adjustments to how we've configured Django before we can send emails in production. There are three additional steps:

  • Set boto3/SES as the email backend
  • Add the django-ses package to the requirements
  • Add the required environment variables

The new SES email backend can be added to the production configurations with the following updates:

Django SES Email Configs
SES Email Configs

These configurations tell Django to use SES as its engine for sending emails, and they also tell Django how to connect to SES and which address to place in the "from" field. The "from" email address must be verified in SES before you can send emails.
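The exact settings live in the screenshot above; as a rough guide, a django-ses configuration in the production settings usually looks something like this, where the environment variable names are assumptions made to match the pattern used elsewhere in this series:

  # Sketch of a typical django-ses configuration; the environment variable names are assumptions
  EMAIL_BACKEND = "django_ses.SESBackend"
  AWS_SES_REGION_NAME = env("DJANGO_AWS_SES_REGION_NAME")          # e.g. "us-east-1"
  AWS_SES_REGION_ENDPOINT = env("DJANGO_AWS_SES_REGION_ENDPOINT")  # e.g. "email.us-east-1.amazonaws.com"
  DEFAULT_FROM_EMAIL = env("DJANGO_DEFAULT_FROM_EMAIL")            # must be a verified SES identity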

In order to use this backend you need to install the django-ses package. There is no need to add a new app to your installed apps; just placing this inside of your requirements file will do the trick:

  django-ses

Finally, we need to add the new environment variables to our ECS task so that it knows how to send emails. The additional variables can be seen here:

ECS SES Environment Variables

Make sure you use an email that belongs to you!

Adding Celery as a Long Running Service

Now that our app has been updated to be able to send emails via SES we need Celery to run in the background. In order to keep this app simple I'm going to deploy Celery as a new task within the existing service. In the real world, having a standalone service for Celery with its own autoscaling rules would be much better.

Since Celery runs within the same Django context as our application we can reuse the same Docker container. However, when we start the Celery service we don't want the app to run; we want the Celery worker to run. To accomplish this we can just add the following file:

  ./deployments/django/cmds/celery_start

And within this file the contents will start a worker for the application that we previously defined in article #3 - ecs_example:

  #!/bin/bash
  celery -A ecs_example worker -l info

Once we build this container again and push it to ECR we will be able to run celery by referencing this as our command instead of the previous start file. We do that by creating a new task definition and referencing celery_start in our template:

ECS Celery Task

There is a lot in this task definition that is copy/pasted from the Django app. In fact, the only things that are different are the location of the logs, the command to start the container and the name of the container itself. I'm sure I have more to learn about ECS - primarily how to easily create a duplicate task definition that reuses the same set of parameters while selectively overriding a few of them. The current approach is especially annoying for the environment variables since they are duplicated verbatim.
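To make the duplication concrete, here is a boto3 sketch of what registering the Celery task definition amounts to. The article does this in the CFT, and the family, image URI, roles, log group and command path here are all placeholders:

  import boto3  # sketch: family, image URI, roles, log group and command path are placeholders

  ecs = boto3.client("ecs")

  # Same image and environment as the Django task definition; only the container name,
  # command and log location differ, as described above
  ecs.register_task_definition(
      family="celery-ecs-example",
      requiresCompatibilities=["FARGATE"],
      networkMode="awsvpc",
      cpu="256",
      memory="512",
      executionRoleArn="arn:aws:iam::123456789012:role/ecs-task-execution-role",
      containerDefinitions=[{
          "name": "celery",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/django-ecs-example:latest",
          "command": ["./deployments/django/cmds/celery_start"],  # assumes the script keeps this path in the image
          "environment": [],  # the same variables as the Django container, omitted here
          "logConfiguration": {
              "logDriver": "awslogs",
              "options": {
                  "awslogs-group": "/ecs/celery-ecs-example",
                  "awslogs-region": "us-east-1",
                  "awslogs-stream-prefix": "celery",
              },
          },
      }],
  )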

Anywho, with those updates you can redeploy your template and now your ECS service will have three running containers.

ECS Running Containers

Test it Out

When you go to ecs-example.treeschema.com or to your website you should see the app displayed.

ECS Running Django Application

Notice that we get the secure icon from HTTPS; even if we manually type in HTTP we are redirected to HTTPS. Since the collectstatic script was run, the JavaScript now works, and because we ran the migrations earlier the table exists and the app is fully functional.
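A quick way to sanity check the redirect without opening a browser is a tiny requests snippet (swap in your own domain):

  import requests  # quick smoke test; swap in your own domain

  # HTTP should answer with a 301 pointing at the HTTPS URL
  resp = requests.get("http://ecs-example.treeschema.com", allow_redirects=False)
  print(resp.status_code, resp.headers.get("Location"))

  # Following the redirect should land on the app over HTTPS with a 200
  resp = requests.get("http://ecs-example.treeschema.com")
  print(resp.status_code, resp.url)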

Request to Celery

And we receive the email about 15 seconds later confirming that the app is working.

Django App Email

Closing Thoughts

Hopefully you're able to get everything up and running! This walkthrough was meant to provide a comprehensive overview of each of the components that go into creating a production-ready application starting from scratch - or in this case a brand new AWS account. There are numerous places where you can take this to the next level and optimize the infrastructure, deployment processes and code.

Best of luck in getting your app set up!

