A comprehensive guide to ECS deployments - Part 4: Deploying Django & Nginx Containers on ECS with RDS & Redis

by Asher
August 29, 2020
Django ECS Components Coming Together

This is the fourth article in the series to deploy a full stack on AWS ECS using Fargate... the pieces are coming together.

  • Part 1:
    • A complete VPC with security groups, subnets, NAT gateways and more
  • Part 2:
    • Deploying an ECS cluster and IAM roles for Fargate services
    • Setting up a CloudFront distribution to serve static files
  • Part 3:
    • Creating a simple Django app with a celery backend to process asynchronous requests
  • Part 4 (this article):
    • Creating an RDS database & Redis instance
    • Registering the Django app in ECR and deploying it to ECS
  • Part 5:
    • Setting up Auto Scaling, HTTPS routing & serving static files from CloudFront


Overview

All of the coding is now complete for our application and the underlying networking infrastructure is in place. We now need to deploy the services and persistent data stores that our app will use on that infrastructure. In short, that comes down to deploying an RDS Postgres database, an Elasticache Redis instance and an ECS service that contains our Django application with an Nginx container sitting in front.

All code to replicate this application can be found in the Tree Schema ECS Example GitHub repo

Deploying the Databases

I'm going to deploy the Postgres database on RDS, as well as the Redis cache, manually. While this goes against some of the principles around reusability that I laid out in the first article, there are a few reasons why I will do it this way:

  • Creating a database and RDS cluster via CFT can quickly become complex, and I want to show that a hybrid CFT-and-console approach can work well for many use cases
  • Changes to the database configuration that are specific to a given environment will happen over time (the database size may change, a read host may be added, etc.), and it is easier, in my opinion, to manage these rare changes in the GUI for each environment than to try to keep a single CFT in sync between environments
  • Databases are often the most costly part of the infrastructure in AWS, and I like to keep a high touch on any form of database change

As a reminder, we do not want our databases to be accessible from anywhere on the internet. They should only be reachable from within our VPC, and more specifically only from our app security group and our bastion security group. Our database deployment will look like this:

Database Deployment

Creating the Postgres Database

Creating the database is actually rather trivial; the only "gotcha" is that RDS databases need to be placed into a subnet group, which is just that: a group of subnets that the database is eligible to be placed into. When you create your database you will specify a subnet group and, in turn, AWS will know where to deploy your database.

First, navigate to "Subnet groups" under the RDS console and create a new subnet group. After you choose your VPC you can select the availability zones within it, and finally the subnets that exist within those availability zones. In the first post we created subnets in us-east-1a and us-east-1b, so our subnet group will cover those two availability zones. Make sure you only select the private subnets since we do not want any external access!

RDS Subnet Creation
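
If you prefer scripting these one-off steps, the same subnet group can also be created with the AWS CLI. This is just a sketch; the group name and subnet IDs are placeholders, so swap in your own private subnet IDs.

  aws rds create-db-subnet-group \
  --db-subnet-group-name django-ecs-db-subnets \
  --db-subnet-group-description "Private subnets for the app database" \
  --subnet-ids subnet-11111111 subnet-22222222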

Now we can create the database. To further enhance security, we will also select the specific database security group that we previously created; it only allows incoming traffic from specific security groups within our VPC. The important networking configurations are under Connectivity:

Database Connectivity Configs

Everything else is going to be kept to a minimum for this example: micro sized database, burstable class, minimum storage, etc. Don't forget the password for your database! Once the database is created we'll be able to see the host:

Database Host
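
For reference, the console steps above map roughly to the following AWS CLI call. The identifier, credentials and security group ID are placeholders, but note the flags that matter for our setup: the subnet group, the database security group and --no-publicly-accessible.

  aws rds create-db-instance \
  --db-instance-identifier django-ecs-db \
  --engine postgres \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username {YOUR_USER} \
  --master-user-password {YOUR_PASSWORD} \
  --db-subnet-group-name django-ecs-db-subnets \
  --vpc-security-group-ids {YOUR_DB_SECURITY_GROUP_ID} \
  --no-publicly-accessible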

Creating the Redis Instance

This will be slightly easier than creating the RDS instance but overall the steps are similar. Within the AWS Elasticache dashboard we first need to create a subnet group. Once again, only use the private subnets since this Redis instance won't be accepting any traffic from the public internet.

Elasticache Subnet Group

Now simply navigate to Redis and select create a new instance. I'm going to disable the cluster option so that this app has only a single server and no replicas, and the node type will be set to the smallest possible size to reduce costs. Make sure you configure these properly for your app's needs. Finally, make sure your new subnet group and the existing database security group are selected. If you've been following along since the beginning, you will need these two networking selections to ensure your services can connect to Redis.

Redis Configs

Once it is created you can access the host.

AWS Redis Instance Overview
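
As with RDS, the Redis setup can be scripted if you'd rather not click through the console. The following is a rough equivalent of the steps above; the names, subnet IDs and security group ID are placeholders.

  # Subnet group with only the private subnets
  aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name django-ecs-redis-subnets \
  --cache-subnet-group-description "Private subnets for Redis" \
  --subnet-ids subnet-11111111 subnet-22222222

  # Single, non-clustered node at the smallest size
  aws elasticache create-cache-cluster \
  --cache-cluster-id django-ecs-redis \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1 \
  --cache-subnet-group-name django-ecs-redis-subnets \
  --security-group-ids {YOUR_DB_SECURITY_GROUP_ID}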

Testing the Database Connections

The first order of business is to create a jump server that will let us verify that our database connectivity is working. The jump server, often also called a bastion server (although technically these have different meanings), will sit in our bastion security group, which will allow us to use an SSH tunnel to send traffic from our local machine to the database.

From the EC2 home page, simply select launch new instance, choose the instance type that fits your needs (I'm using the most recent Amazon Linux 2 AMI on a micro-sized instance), change the subnet to one of your public subnets and enable the publicly assigned IP.

Ec2 Network Selection

You can hit next a few times until you're prompted to create a new security group or to reuse an existing one. Make sure to select "use existing" and then select your bastion security group. This is important because this is the only security group that allows inbound traffic from the internet on port 22 and that can send traffic into the database security group.

Ec2 Security Group Selection

Create your key and start the server. Once it spins up you will see the bastion security group under the "Security" tab in the EC2 details; click into it and edit the inbound rules to give yourself SSH access from your current IP address.

Allow Inbound SSH to Bastion Security Group

To test the connections we'll SSH into our jump server and run a basic check against each data store. Because the jump server has similar networking to what our app will have once it is deployed, a successful test gives us confidence that our app will connect properly as well. First, SSH into your server; you'll need the public IP from the jump server you set up a few steps ago.

            
  ssh -i ~/.ssh/my-key.pem ec2-user@1.2.3.4
            
            

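As an aside, since the jump server sits in the bastion security group you can also use it as an SSH tunnel and run the database clients from your own machine instead of on the server. For example, the following (with your own key, RDS host and IP substituted) forwards local port 5432 to the RDS instance:

  ssh -i ~/.ssh/my-key.pem -N -L 5432:{YOUR_RDS_HOST}:5432 ec2-user@1.2.3.4
  # In another terminal, connect through the tunnel
  psql -h localhost -U {YOUR_USER}
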
To test Redis, install the CLI (with your package manager) and run a ping against the redis host listed above. You will need to replace your host with the values for your instance.

              
  sudo yum install redis
  redis-cli -u redis://{YOUR_HOST}:6379/0 ping
  # PONG
              
              

You should see PONG as the output. If so, move on to testing Postgres. We will test the connection and, while we're here, also create the database that Django will use for its tables. Make sure that psql is installed on the server.

              
  sudo yum install postgresql
  psql -h {YOUR_HOST} -U {YOUR_USER}
              
              

You will be prompted for the password. After you log in you will be able to execute queries. Simply logging in validates the connection. Now, create the database. We will use the database name "django_ecs" for this app.

              
  postgres=> create database django_ecs;
  # CREATE DATABASE
              
              

AWS Secrets for Sensitive Information

When the app is running, Django will need to know how to connect to the database, so we need to provide the app with the host, username and password. Since we don't want to check sensitive information into GitHub, we'll create a secret in AWS with the information required for the connection. I'm also going to put some other values into the secret, including the Postgres host, the Django secret key (required for cryptographic signing) and other fields that I feel should not be checked into GitHub.

We will use these secrets shortly to inject the values directly into the ECS container as environment variables. The secret is simply stored as "other" in Secrets Manager and can be edited as JSON. When completing the secret setup I'm going to skip key rotation and all of the other configs that AWS provides for now and just save it as-is in AWS.

AWS Secrets Configs
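
If you'd rather create the secret from the CLI, something like the following works. The secret name and the keys other than DJANGO_SECRET_KEY are just examples; use whatever keys your Django settings expect.

  aws secretsmanager create-secret \
  --name django-ecs-secrets \
  --secret-string '{
      "DJANGO_SECRET_KEY": "{YOUR_DJANGO_SECRET}",
      "POSTGRES_HOST": "{YOUR_RDS_HOST}",
      "POSTGRES_USER": "{YOUR_USER}",
      "POSTGRES_PASSWORD": "{YOUR_PASSWORD}"
  }'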

Django Prod Configs

In the previous example we used two configuration files when running the app locally:

              
  ./ecs_example/config/settings/base.py
  ./ecs_example/config/settings/development.py
              
              

We will add one more specifically for our production deployment:

              
  ./ecs_example/config/settings/production.py
              
              

The production configuration will tell Django how to connect to our Postgres and Redis databases by reading environment variables that are passed in via the ECS container, as well as secrets passed in from AWS Secrets Manager. This new file will have the content below. Don't worry about where the DJANG_ECS_SECRETS environment variable comes from at the moment; we will create that shortly.

Django Production Configs
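
The screenshot above shows the full file from the repo; as a rough sketch, the important parts look something like this. It assumes base.py defines an env helper (django-environ), and the secret keys other than DJANGO_SECRET_KEY, along with the REDIS_URL variable, are illustrative names that should match whatever you actually stored and pass into the container.

  # config/settings/production.py (sketch)
  import json
  from .base import *  # noqa: start from the shared base settings, which define `env`

  # Serialized JSON injected by ECS from AWS Secrets Manager
  DJANG_ECS_SECRETS = json.loads(env("DJANG_ECS_SECRETS"))

  SECRET_KEY = DJANG_ECS_SECRETS['DJANGO_SECRET_KEY']
  DEBUG = False

  DATABASES = {
      'default': {
          'ENGINE': 'django.db.backends.postgresql',
          'NAME': 'django_ecs',
          'HOST': DJANG_ECS_SECRETS['POSTGRES_HOST'],
          'USER': DJANG_ECS_SECRETS['POSTGRES_USER'],
          'PASSWORD': DJANG_ECS_SECRETS['POSTGRES_PASSWORD'],
          'PORT': '5432',
      }
  }

  # Redis for the Celery broker, passed in as a plain environment variable
  CELERY_BROKER_URL = env("REDIS_URL")  # e.g. redis://{YOUR_REDIS_HOST}:6379/0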

Create the Docker Containers

We're almost there! We just need to create a Docker container for the Django app and another for our Nginx reverse proxy, and add them to ECR. Once our containers are in ECR we can deploy the services and use our app!

All of the commands below are run from the following directory within the ECS Example Repo:

              
  ./app/ecs_example
              
              
Django Container

Creating the Docker container for our app is going to be quite easy. We just need to define a Dockerfile and then build and push it to ECR. We will add a "start" script to our Django container that will be executed each time ECS creates a new instance of this container. Within this script we will start Django with gunicorn, and we will also make sure that the tables in our database stay in sync with the models in our Django app by running any new Django migrations. This is slightly hacky, and not necessarily recommended for production, but it works well for this use case since our app is simple, and it ensures every new version of the app that is deployed will have the correct tables.

The startup script is created in the following location:

              
  ./deployment/django/cmds/start
              
              

And has the following content. Notice that we are not creating migrations, only executing the migrations that were created as part of our development process.

Django Startup Script
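
If the image is hard to read, the gist of the script is below. The gunicorn target assumes the WSGI module lives at config.wsgi, in line with this project's layout; check the repo for the exact version.

  #!/bin/sh
  set -o errexit

  # Run the migrations that were committed during development (no makemigrations here)
  python manage.py migrate --noinput

  # Start the app with gunicorn on port 8000, where Nginx will look for it
  gunicorn config.wsgi:application --bind 0.0.0.0:8000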

Now that we have our startup command in place we just need to create the Dockerfile:

              
  ./deployment/django/Dockerfile
              
              

Inside of this Dockerfile add the contents below. Most of this is from the Django CookieCutter project but there are a few minor edits to copy our startup commands:

Django Dockerfile
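
A stripped-down version of that Dockerfile looks roughly like the following; the Python version, requirements file and paths are assumptions, so defer to the repo for the real thing.

  FROM python:3.8-slim

  ENV PYTHONUNBUFFERED 1
  WORKDIR /app

  # Install the app's dependencies first so Docker can cache this layer
  COPY ./requirements.txt /app/requirements.txt
  RUN pip install --no-cache-dir -r /app/requirements.txt

  # Copy the application code and the start script
  COPY . /app
  COPY ./deployment/django/cmds/start /start
  RUN chmod +x /start

  CMD ["/start"]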

We can now build the container, tag it, and push it to ECR. We will name this container django_ecs_app. In order to interact with ECR we also need to log in first. The following commands will handle all of these tasks; you will need your AWS account ID and the region that you are deploying services into.

            
  ACCOUNT_ID={YOUR_ACCT_ID}
  REGION={YOUR_AWS_REGION}
  C_NAME=django_ecs_app
  TAG=0.0.1
  
  # ECR login
  $(aws ecr get-login --no-include-email)
  
  # One time ECR repo creation
  aws ecr create-repository --repository-name $C_NAME

  # Build the container using the specified Docker file but pass 
  # in all of the context and files from the current location
  docker build --no-cache \
  -t $C_NAME:$TAG \
  -f ./deployment/django/Dockerfile .
  
  # Tag the image 
  docker tag $C_NAME:$TAG \
  $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$C_NAME:$TAG

  # Push to ECR 
  docker push $ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$C_NAME:$TAG

            
            
Nginx Container

The Nginx container will be responsible for acting as our reverse proxy in front of Django (more specifically, in front of the WSGI server that sits right before Django). The nginx.conf used to support this app is rather simple: we will have one server block that points to the coupled Django application running on port 8000, which is defined in the upstream block.

Nginx Config File
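
In case the image is hard to read, a minimal version of that config looks something like this: a single upstream pointing at localhost:8000 (the Django container in the same task) and a server block that proxies everything to it.

  upstream django_app {
      # The Django container runs in the same task, so it is reachable on localhost
      server localhost:8000;
  }

  server {
      listen 80;

      location / {
          proxy_pass http://django_app;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }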

Now just build the Nginx container using a Dockerfile and push it to ECR with the same steps listed above, but to a different ECR repo name. The Dockerfile is straightforward:

Nginx Dockerfile
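
As a sketch, it only needs to copy the config above into the stock Nginx image; the config path is an assumption based on this repo's layout.

  FROM nginx:1.19
  COPY ./deployment/nginx/nginx.conf /etc/nginx/conf.d/default.conf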

Deploying the ECS Service via CFT

The final step is to create the ECS service and log into our app! There are a handful of additional resources that we need to deploy in order to accomplish this, all of which can be found in the deployment template.

  • An ECS task that runs both Django and Nginx
  • CloudWatch logs for the ECS task
  • An Application Load Balancer (ALB)
  • A target group
  • An HTTP routing rule
Template Parameters

Just as we did in the other parts of this tutorial, I am going to move the common and boilerplate values up into the parameters section. There isn't much of interest here, but you will want to be careful that the name and tag of the container you created above match the values in your template. We define the container names as well as the names for the corresponding ECS tasks and services for both the Django app and the Celery app.

CFT Template Parameters
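
For orientation, the parameters section is shaped roughly like this; the parameter names are illustrative, but the defaults must match the container name and tag pushed to ECR above.

  Parameters:
    DjangoContainerName:
      Type: String
      Default: django_ecs_app
    DjangoImageTag:
      Type: String
      Default: 0.0.1
    Environment:
      Type: String
      Default: prod
      AllowedValues: [dev, prod]
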
Variable Mappings

We're only going to deploy this code to one environment, but the whole purpose of CFT is to enable you to replicate the deployment across environments. In that regard, mappings are perfect for non-sensitive values that change based on the environment. For that reason I have written this CFT with both a dev and a prod mapping, even though we will only be using the prod values for now.

One of the configs that I have in the mapping is loadbalancerCertArn. This tutorial does not walk through how to get a domain, register it, generate a cert and all that fun stuff; in the next article we will go through the details of setting up a cert for an existing domain. When you do deploy your app you will absolutely want a valid certificate so that your clients can use HTTPS.

CFT Template Mappings
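
A trimmed-down version of the mappings might look like the following, with the prod branch being the one we actually use for now (the map and key names besides loadbalancerCertArn are illustrative):

  Mappings:
    EnvironmentMap:
      dev:
        desiredCount: 1
        loadbalancerCertArn: ''   # filled in once a cert exists (next article)
      prod:
        desiredCount: 1
        loadbalancerCertArn: ''

  # Values are read with FindInMap, e.g.:
  #   !FindInMap [EnvironmentMap, !Ref Environment, desiredCount]
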
Log Group

To send native Python logging to CloudWatch we'll need to configure a log group to receive the messages. The log group can be associated with one or more container definitions, so you could use one log group for both Nginx and Django or create a unique group for each.

CFT Log Groups
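
The resource itself is tiny; here's a sketch with an assumed name and retention period:

  DjangoLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /ecs/django-ecs-app
      RetentionInDays: 30
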
ECS Task

The ECS task that we're deploying will run multiple containers - Django and Nginx. The most important thing with this deployment is that we use the networking mode awsvpc so that our containers can communicate with each other. As we saw in the Nginx configuration, this will allow us to route traffic between containers by using localhost and the corresponding ContainerPort.

CFT ECS Task

You can also see that the container definitions point to the same repo locations in ECR where the images were pushed. The networking and roles are imported from a stack that was deployed in a prior article. The command for the Django app references the start script that was generated above, while the Nginx container uses the default startup steps defined in the upstream image.
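
As a hedged sketch (the CPU and memory sizes, role import name and secret reference are placeholders; see the repo template for the real values), the task definition is shaped like this:

  DjangoTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: '512'
      Memory: '1024'
      ExecutionRoleArn: !ImportValue EcsTaskExecutionRoleArn   # illustrative export name from the Part 2 stack
      ContainerDefinitions:
        - Name: django_ecs_app
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/django_ecs_app:0.0.1'
          Command: ['/start']
          PortMappings:
            - ContainerPort: 8000
          Secrets:
            - Name: DJANG_ECS_SECRETS
              ValueFrom: '{ARN_OF_YOUR_SECRET}'
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref DjangoLogGroup
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: django
        - Name: nginx
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/nginx_ecs_app:0.0.1'
          PortMappings:
            - ContainerPort: 80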

The final item to note here is the Secrets attribute under the ContainerDefinition. The Name that has been provided will become an environment variable in our container, and the ValueFrom should resolve to the ARN of the AWS secret that we want to inject. When you retrieve secrets from AWS Secrets Manager it hands them to you in a JSON format, and the same principle applies here: AWS will inject a serialized version of your secrets into this environment variable. As a quick reminder, this is how we are accessing these secrets within the Django configs:

                
  # Load the secrets to a dictionary
  DJANG_ECS_SECRETS = json.loads(env("DJANG_ECS_SECRETS"))

  # Access values from the secrets
  SECRET_KEY = DJANG_ECS_SECRETS['DJANGO_SECRET_KEY']
              
              
ECS Service

The ECS task by itself will not run forever; we need an ECS service that creates the task and keeps it alive. The service is also responsible for determining how many instances of the app run at a time, which cluster they are deployed to, and the networking configurations. The ECS service will sit behind an application load balancer (ALB) that can route traffic to one of the containers running within the service.

CFT ECS Service
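
A sketch of the service follows; the cluster, subnet and security group imports are named illustratively, and yours should match whatever the earlier stacks exported.

  DjangoService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !ImportValue EcsClusterName
      LaunchType: FARGATE
      DesiredCount: 1
      TaskDefinition: !Ref DjangoTaskDefinition
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          Subnets:
            - !ImportValue PrivateSubnet1
            - !ImportValue PrivateSubnet2
          SecurityGroups:
            - !ImportValue AppSecurityGroup
      LoadBalancers:
        - ContainerName: nginx
          ContainerPort: 80
          TargetGroupArn: !Ref DjangoTargetGroup
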
Application Load Balancer

The service above referenced a load balancer, so we need to create that now. Notice that the type is "application". The load balancer also needs to be placed into the public subnets of our VPC and it needs to be open to the internet on ports 80 (HTTP) and 443 (HTTPS). We defined those subnets and security group rules in the first article, and those resources are referenced here.

CFT Application Load Balancer
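
The load balancer resource itself is short; again, the subnet and security group import names are illustrative.

  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: application
      Scheme: internet-facing
      Subnets:
        - !ImportValue PublicSubnet1
        - !ImportValue PublicSubnet2
      SecurityGroups:
        - !ImportValue LoadBalancerSecurityGroup
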
HTTP Listener

When a load balancer receives a request it needs to know what to do with it. ALBs are able to "listen" on multiple ports and take different actions depending on the port the traffic was received on. For this initial setup the ALB will only be listening on port 80 and it will route all HTTP traffic to a target group; behind that target group will be our application. In the next article we'll set up an HTTPS listener, redirect all HTTP traffic to HTTPS, and have the HTTPS listener forward to the target group. But for now, since we don't have a cert for HTTPS, we'll just use HTTP:

CFT HTTP Listener
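
A sketch of the HTTP-only listener, forwarding everything on port 80 to the target group:

  HttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref DjangoTargetGroup
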
Target Group

The target group is like a mapping between the load balancer and, well, a target. Our target will be the ECS service; as you saw above, the target group is referenced in the LoadBalancers section of the ECS service. The target group has another really nice feature in that it will monitor the health of your application. To do this you'll need a health check endpoint that returns an HTTP code in the 200-299 range. I threw in a quick endpoint in the Django app under /health/ that this target group will use. Should the health check time out or return an unhealthy response for more than 2 consecutive checks, the target group will remove and replace the container that failed.

CFT Target Group
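
A sketch of the target group, using the /health/ endpoint described above; the VPC import name is illustrative, and TargetType must be ip for Fargate tasks using awsvpc networking.

  DjangoTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      TargetType: ip
      Protocol: HTTP
      Port: 80
      VpcId: !ImportValue VpcId
      HealthCheckPath: /health/
      HealthCheckIntervalSeconds: 30
      UnhealthyThresholdCount: 2
      Matcher:
        HttpCode: '200-299'
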
Deploy the template

You can deploy the template from the templates section with a command similar to:

                
 sam deploy \
 -t django-app.yaml \
 --stack-name django-ecs-app \
 --capabilities CAPABILITY_IAM   
                
                

You can now check out your app! If you go into the AWS EC2 service and then navigate to Load Balancers, you can pick out the URL for your newly created application load balancer. The URL should be under DNS Name and end in .com, so you can't miss it.

The App Running on ECS!

The URL will be flagged as not secure since we're not redirecting to HTTPS, and the app won't fully work yet because we are not yet serving the JavaScript files that the front end requires, but we'll fix both of those in the next article.

Closing Thoughts

We have a basic shell of our app running in ECS! This is a big win. There are still a handful of minor items to fix before this is ready for production-like use, plus there is the Celery service. We're close to getting this into the end zone; in the final tutorial we'll wrap up this project and get everything running together.

