From Monolithic to Microservices

In the new version 6 of Architecting on AWS, released in October 2018, there is a discussion on Docker, microservices and ECS.

There is an awesome workshop here about breaking a monolithic Docker application into microservices. The application hosts a simple message board with threads and messages between users.

The workshop is Intermediate level with a time to complete of 2 hours.

It may not be the ideal way to get started with ECS and Docker; on the other hand, it is achievable even if you have never used either, and you will learn a lot along the way. It will probably take you longer than two hours.

To get started with Docker I recommend Docker Basics for Amazon ECS, which walks through installing Docker on an Amazon Linux instance, creating a Hello World Docker image from a supplied Dockerfile, and pushing it to the Amazon Elastic Container Registry (ECR).
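If you'd rather script the registry side of that tutorial, here is a minimal boto3 sketch; the repository name "hello-world" is just an example, not the tutorial's exact value:

```python
# Sketch: create an ECR repository and print the URI you would tag and push to.
import boto3

ecr = boto3.client("ecr")

repo = ecr.create_repository(repositoryName="hello-world")  # example name
print("Push your image to:", repo["repository"]["repositoryUri"])

# The actual "docker build" and "docker push" steps are run from the shell,
# after logging in with the token from ecr.get_authorization_token().
```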

To get started with ECS, the Getting Started wizard is the best way. At least you would think so. The problem is this:

ECS offers two modes: EC2 mode and Fargate mode. With EC2 mode the containers run on ECS-optimized EC2 instances in an Auto Scaling Group, and that is the best mode to start learning about ECS. However, at the time of writing Fargate is being rolled out to more regions. Fargate moves the Auto Scaling Group and instances behind a managed control plane, so you do not have visibility into the Auto Scaling Group or the instances. That's fine, and that's the whole point of Fargate, but this project does not use Fargate. When Fargate is rolled out to a new region, the ECS Getting Started wizard in that region changes from EC2 mode to Fargate mode, with no option to change it back. In real life that's not a problem, as you wouldn't use the Getting Started wizard anyway; it's just for learning.
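For what it's worth, the difference between the two modes shows up clearly in the API. A hedged boto3 sketch, with placeholder cluster, task definition and subnet names:

```python
import boto3

ecs = boto3.client("ecs")

# EC2 mode: tasks are placed on container instances you manage in an Auto Scaling Group.
ecs.run_task(
    cluster="demo-cluster",        # placeholder
    taskDefinition="web-task:1",   # placeholder
    launchType="EC2",
)

# Fargate mode: no instances to manage, but you must supply VPC networking details.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-task:1",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```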

Going back to the workshop, once you start looking at the prerequisites in the instructions, you may be put off. First, it assumes you will install Docker and the AWS CLI locally on a Mac or Windows machine, and links are provided. However, you can use an Amazon Linux instance instead; instructions here.

They also mention Atom as a popular open-source text editor. Good as it is, you don't need it for this project. In fact you don't edit any files at all; you just download the project files and look at them. I downloaded the project files to my local machine and then used WinSCP to copy them to the Linux instance.

After that, the workshop worked as written, although given the fast-evolving ECS GUI there will be some minor differences.

At the end of the workshop, we test our microservices-based application by navigating to the DNS name of our Load Balancer and appending a path:

The ALB routes this path to a particular target group, and on to a container running on one of the EC2 instances in an ECS Cluster.
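Under the hood that is just an ALB listener rule with a path-pattern condition. A rough boto3 sketch, with placeholder ARNs and an example path rather than the workshop's exact values:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Requests matching /api/users* are forwarded to the target group
# for one particular microservice.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/demo/...",   # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/users*"]}],      # example path
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/users/...",  # placeholder
    }],
)
```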

As always, the last step is a clean-up. Personally, I like to keep things around for a while for demos, so until I automate the whole thing I save costs by scaling down the ECS services to 0 tasks, scaling the cluster down to 0 instances, and deleting the ALB.
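Those cost-saving steps are easy to script. A boto3 sketch, using placeholder service, cluster, ASG and ALB names:

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("autoscaling")
elbv2 = boto3.client("elbv2")

# Scale each ECS service down to 0 tasks (service names are placeholders).
for service in ["monolith", "users", "threads", "posts"]:
    ecs.update_service(cluster="demo-cluster", service=service, desiredCount=0)

# Scale the underlying Auto Scaling Group down to 0 instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="demo-ecs-asg", MinSize=0, DesiredCapacity=0
)

# Delete the ALB; it can be recreated later along with its listener rules.
elbv2.delete_load_balancer(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/demo/..."
)
```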

 

 

CPU architecture: AMD64 Clarified

As covered in Technical Essentials and Architecting, EC2 instances run on Intel hardware.

However, one of the first things you see when you deploy a Windows instance is this:

This link explains:

Advanced Micro Devices (AMD) introduced the first commercially successful 64-bit architecture based on the Intel x86 instruction set. Consequently, the architecture is widely referred to as AMD64, regardless of the chip manufacturer.

 

IPv6 in your VPC

Nothing to do with the course or the exam, but just for fun I configured IPv6 in my VPC.

As a starting point I used a completed lab of Architecting on AWS. The final lab is as follows:

So it's the classic setup: 2 public and 2 private subnets across 2 AZs, with an ALB handling incoming traffic to the web app. To test the incoming traffic, we browse to the DNS name of the ALB. To test the NAT Gateways, the web app queries an internet site, freegeoip, for its coordinates. The public IP we see is the EIP of one of the NAT Gateways.

Copying the coordinates (removing the “/”) into Google Maps shows that the IP is located in the middle of a canal in Dublin.

Which just happens to be about 200m from Amazon Ireland.

Let's change to using IPv6 for this outgoing traffic.

VPC>Actions>Edit CIDRs>Add IPv6 CIDR

We get a CIDR range for the VPC.

2a05:d018:1d6:1e00::/56

It's always a /56, which means we have 8 bits to assign a unique prefix to each subnet, which means in theory we can have 256 subnets, at least according to the IPv6 rules.

The addresses are from a public range, as is usual with IPv6.
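The same step can be done with boto3; here is a sketch with a placeholder VPC ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Request an Amazon-provided IPv6 /56 for the VPC (Amazon picks the range).
resp = ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",        # placeholder
    AmazonProvidedIpv6CidrBlock=True,
)
print(resp["Ipv6CidrBlockAssociation"])   # shows the assigned /56 association
```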

For each Subnet:

Actions>Edit IPv6 CIDRs>Add IPv6 CIDR. You will be presented with something like this:

2a05:d018:1d6:1e00::/64

and you are prompted to enter the last two hex digits of the prefix, that is, the part just before the double colon. This uniquely identifies each subnet, and you could use the range 00-03 for the four subnets.
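Or, as a boto3 sketch (the subnet IDs are placeholders; the /64s follow the 00-03 scheme above):

```python
import boto3

ec2 = boto3.client("ec2")

# Carve one /64 out of the VPC's /56 for each subnet.
subnets = {
    "subnet-aaaa0000aaaa0000a": "2a05:d018:1d6:1e00::/64",
    "subnet-bbbb1111bbbb1111b": "2a05:d018:1d6:1e01::/64",
    "subnet-cccc2222cccc2222c": "2a05:d018:1d6:1e02::/64",
    "subnet-dddd3333dddd3333d": "2a05:d018:1d6:1e03::/64",
}

for subnet_id, ipv6_cidr in subnets.items():
    ec2.associate_subnet_cidr_block(SubnetId=subnet_id, Ipv6CidrBlock=ipv6_cidr)
```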

For each EC2 instance:

Actions>Networking>Manage IP Addresses>Assign IPv6 Address

You get something like this:

2a05:d018:1d6:1e03:10b4:f418:a79:76ec

The last 64 bits are the auto-generated interface ID.

According to the rules of IPv6, we could have 2^64 addresses in the subnet.
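The same console action, sketched with boto3 (the network interface ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Assign one IPv6 address to the instance's primary network interface,
# letting AWS pick an address from the subnet's /64.
ec2.assign_ipv6_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",   # placeholder
    Ipv6AddressCount=1,
)
```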

For the Load Balancer:

Actions>Edit IP address type>dualstack

Now the LB has an IPv4 and an IPv6 stack.
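Again as a boto3 sketch, with a placeholder load balancer ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Switch the ALB from ipv4 to dualstack so it listens on both address families.
elbv2.set_ip_address_type(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/demo/...",  # placeholder
    IpAddressType="dualstack",
)
```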

For the route tables associated with the private subnets, add a route ::/0 and choose the internet gateway as the target.

The security groups need to allow outbound traffic. In this case mine was already wide open; it had a rule allowing All Traffic to ::/0.
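Both of those steps, sketched with boto3 (route table, IGW and security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Default IPv6 route for a private subnet's route table, pointing at the Internet Gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder
    DestinationIpv6CidrBlock="::/0",
    GatewayId="igw-0123456789abcdef0",             # placeholder
)

# Outbound rule allowing all IPv6 traffic, if the security group is not already open.
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",                # placeholder
    IpPermissions=[{"IpProtocol": "-1", "Ipv6Ranges": [{"CidrIpv6": "::/0"}]}],
)
```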

Refresh the web page. It worked!

The IPv6 address we see is now the IPv6 address of the EC2 instance.

We no longer need the NAT Gateways, so we can save about 8 pence an hour!

Now the private subnets are no longer private as far as IPv6 is concerned. To increase security we could use an “Egress Only Internet Gateway” or tighten up the security groups.
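A sketch of the Egress-Only option with boto3 (the VPC and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an Egress-Only Internet Gateway for the VPC.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")  # placeholder
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Point the private subnet's IPv6 default route at it instead of the IGW,
# so instances can initiate outbound IPv6 traffic but cannot be reached inbound.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```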

 

Step Functions

The “Architecting on AWS” course started covering Step Functions in October 2018.

The course labs do not include step functions, but there is a good “10 Minute” tutorial here.

Step Functions is a serverless orchestration service that lets you easily coordinate multiple Lambda functions into flexible workflows that are easy to debug and easy to change.

The workflows you build with Step Functions are called state machines, and each step of your workflow is called a state.

All the work is done by Task states, which can be Lambda functions.

The tutorial creates a state machine, which is presented visually:

This state machine uses a series of Task states to open, assign and work on a support case. Then, a Choice state is used to determine if the case can be closed or not. Two more Task states then close or escalate the support case as appropriate.

The above image shows an execution of the workflow where the support case was escalated, causing the workflow to exit with a Fail state.
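For reference, here is a trimmed sketch of what an Amazon States Language definition along these lines looks like. The Lambda ARNs are placeholders, and several of the tutorial's states are omitted for brevity:

```python
import json

# Simplified state machine: a Task, a Choice, and either a closing Task or an
# escalation Task that ends in a Fail state.
definition = {
    "StartAt": "Work On Case",
    "States": {
        "Work On Case": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:WorkOnCase",    # placeholder
            "Next": "Is Case Resolved",
        },
        "Is Case Resolved": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.Status", "NumericEquals": 1, "Next": "Close Case"}
            ],
            "Default": "Escalate Case",
        },
        "Close Case": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:CloseCase",     # placeholder
            "End": True,
        },
        "Escalate Case": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:...:function:EscalateCase",  # placeholder
            "Next": "Fail",
        },
        "Fail": {
            "Type": "Fail",
            "Cause": "Engage Tier 2 Support",
        },
    },
}

print(json.dumps(definition, indent=2))
```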

In a real-world scenario, you might decide to continue working on the case until it is resolved, instead of failing out of your workflow. To do that, you could remove the Fail state and edit the Escalate Case Task in your state machine to loop back to the Work On Case state. No changes to your Lambda functions would be required.
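Continuing the sketch above, that change is just a matter of re-pointing the Escalate Case state and dropping the Fail state:

```python
# "Escalate Case" now loops back to "Work On Case" instead of ending in a Fail state.
definition["States"]["Escalate Case"]["Next"] = "Work On Case"
del definition["States"]["Fail"]
```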