AWS Fargate

AWS Fargate is covered with one slide in the course “Architecting on AWS”. The course notes say:

AWS Fargate is a technology for Amazon ECS and Amazon Elastic Container Service for Kubernetes (Amazon EKS) that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate eliminates the need for you to interact with or think about servers or clusters. Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them.

To get started with Fargate, you can follow this walkthrough.

The ECS console may be subject to more frequent change than the consoles for other services, so bear that in mind. This post was written on 04/03/20.

As prerequisites, you should be familiar with:

  • EC2 Autoscaling
  • Application Load Balancer

I assume you have covered some theory on Docker, ECS and Fargate.

Navigate to ECS, Clusters, Get Started.

In the container definition section, there is a choice of images to use.  You could use all the defaults, and then you would end up with a single task, no load balancer, and a “Welcome to ECS” web page.

Instead, I will do a few customisations.

Choose the custom image, and click Configure

In the Image field, carefully type nginxdemos/hello

This refers to an image in Docker Hub that displays a web page including the container’s IP address, which is useful for load balancer testing.

In port mappings, for Container Port, type 80, then click Update

Each task will get its own ENI with its own IP address, accepting traffic on port 80. An Application Load Balancer will distribute requests across these IPs.
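
If you prefer to script this step instead of using the console, a minimal boto3 sketch of an equivalent Fargate task definition might look like this (the family name, region, CPU/memory sizes and execution role ARN are my own placeholders, not values the wizard shows you):

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # assumed region

# Fargate requires the awsvpc network mode and task-level CPU/memory.
ecs.register_task_definition(
    family="hello-demo",                      # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",                                # 0.25 vCPU
    memory="512",                             # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "hello",
            "image": "nginxdemos/hello",       # the Docker Hub image used above
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```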

In the Define Your Service section, click Edit and change the number of desired tasks from 1 to 2. This gives us higher availability. Choose Application Load Balancer, click Save, then Next.

Give the cluster a name, e.g. mycluster, then click Next, Create.
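
The wizard creates the cluster and service for you, but the equivalent API calls are roughly the following sketch (the service name, subnets, security group and target group ARN are placeholders you would substitute with your own):

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # assumed region

ecs.create_cluster(clusterName="mycluster")

# The service keeps the desired number of tasks running and registers each
# task's ENI IP address with the ALB target group.
ecs.create_service(
    cluster="mycluster",
    serviceName="hello-service",              # hypothetical service name
    taskDefinition="hello-demo",              # the family registered earlier
    desiredCount=2,
    launchType="FARGATE",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/hello/abc123",  # placeholder
            "containerName": "hello",
            "containerPort": 80,
        }
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa111", "subnet-bbb222"],  # placeholder subnets
            "securityGroups": ["sg-0123456789abcdef0"],     # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)
```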

It will take up to 10 minutes to create all the resources, but probably a lot less.

It uses CloudFormation in the background. As the stack builds, click on the resources as they are created to understand what it is doing.

It creates the following (see the sketch after this list for a way to enumerate the same resources from the CloudFormation stack):

  • A Cluster, with 2 running tasks
  • A Task Definition, which includes the name of the image that we supplied
  • A Service with 2 running tasks
  • A CloudWatch log group, where you will see HTTP requests to the containers as they are health checked by the ALB
  • A CloudFormation stack which ECS is using right now to create the resources
  • A VPC and two subnets
  • A Security Group allowing the ALB to send requests to the tasks

When it completes, go to EC2, Target Groups to see a new target group with 2 targets (the task IP addresses), both healthy, each in a different AZ.
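
You can check the same thing from code; a sketch, assuming you substitute the real target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed region

tg_arn = "arn:aws:elasticloadbalancing:...:targetgroup/hello/abc123"  # placeholder ARN
for target in elbv2.describe_target_health(TargetGroupArn=tg_arn)["TargetHealthDescriptions"]:
    # Each target is a task ENI IP on port 80; expect a State of "healthy".
    print(target["Target"]["Id"], target["TargetHealth"]["State"])
```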

Go to EC2, Load Balancers, copy the DNS name of the ALB, and paste it into a new browser tab to see the NGINX demo page. Refresh the browser to see the load balancing. The demo page displays the private IP and DNS name of the task.

Go to ECS, Clusters and click on the name of the cluster. There is 1 service with 2 desired and 2 running tasks.

Click on the Tasks tab to see the 2 running tasks.

Select one of them and click Stop to simulate a failure. It will be replaced to satisfy the desired count of 2 tasks.
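
The same failure simulation can be scripted; a sketch, assuming the cluster and service names used earlier:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # assumed region

# Pick one running task in the service and stop it.
task_arns = ecs.list_tasks(cluster="mycluster", serviceName="hello-service")["taskArns"]
ecs.stop_task(cluster="mycluster", task=task_arns[0], reason="simulated failure")

# The service scheduler notices and launches a replacement to get back to 2.
svc = ecs.describe_services(cluster="mycluster", services=["hello-service"])["services"][0]
print("desired:", svc["desiredCount"], "running:", svc["runningCount"])
```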

Click on the ECS instances tab. There are none; that’s the point of Fargate mode!

Click on the Services tab, then click on the name of the service, then click on the Events tab to see the log of events when ECS started and registered the tasks with the ALB.

Now to set up Auto Scaling:

Go to ECS, Clusters, click on the name of the cluster, then the service, and click on the Auto Scaling tab.

Click on Update (top right of page), Next, Next until you get to the Service Auto Scaling page.

Choose Configure Service Auto Scaling to adjust the service desired count. Choose minimum 1, desired 2, maximum 4.

Click Add Scaling Policy, leave the scaling policy type at Target Tracking, and give the policy a name, e.g. mypolicy.

Choose ALBRequestCountPerTarget and enter a target value of 1. Click Save, Next Step, Update Service.

A target value of 1 request per target is artificially low, which makes it easy to demonstrate auto scaling in this test.
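
Behind the scenes the console is doing two things: registering the service as a scalable target and attaching a target tracking policy to it. A boto3 sketch of the equivalent, assuming the cluster and service names used earlier and a placeholder ALB/target group resource label:

```python
import boto3

aas = boto3.client("application-autoscaling", region_name="eu-west-1")  # assumed region

resource_id = "service/mycluster/hello-service"  # format: service/<cluster>/<service>

# Allow the service's DesiredCount to move between 1 and 4 tasks.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target tracking on request count per target. A target of 1 is deliberately
# low so that a single auto-refreshing browser tab can trigger a scale-out.
aas.put_scaling_policy(
    PolicyName="mypolicy",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Ties the metric to this ALB and target group:
            # app/<alb-name>/<alb-id>/targetgroup/<tg-name>/<tg-id>  (placeholder below)
            "ResourceLabel": "app/my-alb/0123456789abcdef/targetgroup/hello/abc123",
        },
    },
)
```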

Go to CloudWatch to see a target tracking alarm in the Insufficient Data state.
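
Target tracking names its alarms with a TargetTracking- prefix, so you can also find them from code (a small sketch; the naming convention is what the service uses today and could change):

```python
import boto3

cw = boto3.client("cloudwatch", region_name="eu-west-1")  # assumed region

for alarm in cw.describe_alarms(AlarmNamePrefix="TargetTracking-")["MetricAlarms"]:
    print(alarm["AlarmName"], alarm["StateValue"])
```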

On the NGINX demo page, tick the AutoRefresh tick box. It will cause the browser to send regular requests to the ALB.

After a bit longer than 3 minutes, the alarm should go into the alarm state.

Go to the ECS service and click on the Events tab to see that ECS has increased the desired count and started up to 4 tasks.

Go to the NGINX demo page to see it load balancing across the tasks.

If you want, clear the AutoRefresh box so that ECS eventually scales in. The scale-in alarm evaluates over a longer period of 15 minutes, and there is also a deregistration delay (aka connection draining) of 300 seconds by default, so scaling in will take a while.
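
If you want scale-in to happen faster during a demo, one option is to shorten the target group’s deregistration delay. A sketch, with a placeholder target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed region

# Reduce connection draining from the default 300 seconds to 30 seconds.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/hello/abc123",  # placeholder
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "30"}],
)
```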

To clean up, go to the cluster and click Delete Cluster. It will delete everything that was created.
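
The console delete works by tearing down the CloudFormation stack. A rough boto3 sketch of the same cleanup done by hand (names as used earlier):

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # assumed region

# Scale the service to zero, delete it (force skips waiting for draining),
# then delete the cluster. The ALB, target group and networking created by
# the wizard live in its CloudFormation stack, so delete that stack as well.
ecs.update_service(cluster="mycluster", service="hello-service", desiredCount=0)
ecs.delete_service(cluster="mycluster", service="hello-service", force=True)
ecs.delete_cluster(cluster="mycluster")
```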

 

 

From Monolithic to Microservices

In the new version 6 of Architecting on AWS, released in October 2018, there is a discussion on Docker, microservices and ECS.

There is an awesome workshop here about breaking a monolithic Docker application into microservices. The application hosts a simple message board with threads and messages between users.

The workshop is Intermediate level with a time to complete of 2 hours.

It may not be the ideal way to get started with ECS and Docker; on the other hand, it is achievable even if you have never used either, and you will learn a lot along the way. It will probably take you longer than two hours.

To get started with Docker I recommend Docker Basics for Amazon ECS, which walks through installing Docker on an Amazon Linux instance, creating a Hello World Docker image from a supplied Dockerfile, and pushing it to Amazon Elastic Container Registry (ECR).
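
The push target in that walkthrough is an ECR repository; creating one and retrieving the URI to tag your image with can also be done from code. A short sketch (the repository name is my own):

```python
import boto3

ecr = boto3.client("ecr", region_name="eu-west-1")  # assumed region

# Create a repository for the hello-world image and print the URI you would
# tag the local image with before running `docker push`.
repo = ecr.create_repository(repositoryName="hello-world")["repository"]
print(repo["repositoryUri"])
```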

To get started with ECS the Getting Started wizard is the best way. At least you would think so. The problem is this:

ECS offers two modes: EC2 mode and Fargate mode. With EC2 mode, the containers run on ECS-optimized EC2 instances in an Auto Scaling Group, and that is the best mode for starting to learn about ECS. However, at the time of writing, Fargate is being rolled out to more regions. Fargate moves the Auto Scaling Group and instances into a managed control plane, so you have no visibility into the Auto Scaling Group or the instances. That’s fine, and it’s the whole point of Fargate, but this project does not use Fargate. When Fargate is rolled out to a new region, the ECS Getting Started wizard in that region changes from EC2 mode to Fargate mode, with no option to change it back. In real life that’s not a problem, as you wouldn’t use the Getting Started wizard; it’s just for learning.
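
Outside the wizard you choose the mode yourself via the launchType parameter; the same cluster can run both kinds of task. A sketch (the task definition names and subnet are hypothetical):

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # assumed region

# EC2 mode: tasks are placed on ECS-optimized EC2 container instances that you
# run in an Auto Scaling Group ("hello-ec2" is a hypothetical bridge-mode task definition).
ecs.run_task(cluster="mycluster", taskDefinition="hello-ec2", launchType="EC2")

# Fargate mode: no visible instances, and awsvpc networking details are required
# ("hello-fargate" is a hypothetical Fargate-compatible task definition).
ecs.run_task(
    cluster="mycluster",
    taskDefinition="hello-fargate",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa111"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```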

Going back to the workshop, once you start looking at the prerequisites in the instructions, you may be put off. First, it assumes you will install Docker and the AWS CLI locally on a Mac or Windows machine, and links are provided. However, you can use an Amazon Linux instance instead; instructions are here.

They also mention Atom as a popular open-source text editor. Good as it is, you don’t need it for this project. In fact you don’t edit any files; you just download the project files and look at them. I downloaded the project files to my local machine and then used WinSCP to copy them to the Linux instance.

After that, the workshop worked as written, although given the fast evolving ECS GUI there will be some minor differences.

At the end of the workshop, we test our microservices-based application by navigating to the DNS name of our Load Balancer and appending a path:

The ALB routes this path to a particular target group, and on to a container running on one of the EC2 instances in an ECS Cluster.
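
Under the hood that routing is an ALB listener rule with a path-pattern condition. A sketch of creating one with boto3 (the listener ARN, target group ARN, path and priority are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed region

# Forward requests matching /api/users* to the target group for one microservice.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/users*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/users/123",  # placeholder
        }
    ],
)
```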

As always, the last step involves a clean-up. Personally, I like to keep things around for a while for demos, so until I automate the whole thing I save costs by scaling the ECS services down to 0 tasks, scaling the cluster down to 0 instances, and deleting the ALB.
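
A sketch of that cost-saving scale-down, with placeholder names for the services, the Auto Scaling Group and the load balancer:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")    # assumed region
asg = boto3.client("autoscaling", region_name="eu-west-1")
elbv2 = boto3.client("elbv2", region_name="eu-west-1")

# Scale each microservice down to zero tasks (service names are placeholders).
for service in ["users", "threads", "posts"]:
    ecs.update_service(cluster="my-ecs-cluster", service=service, desiredCount=0)

# Shrink the cluster's EC2 Auto Scaling Group to zero instances.
asg.update_auto_scaling_group(
    AutoScalingGroupName="my-ecs-asg",  # placeholder ASG name
    MinSize=0,
    DesiredCapacity=0,
)

# Delete the ALB, which is billed per hour even when idle.
elbv2.delete_load_balancer(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/abc"  # placeholder
)
```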