Building on AWS

There is free on-demand training from AWS instructors here.

Most of my posts cover simple walkthroughs that can be completed in less than an hour. This one is about a longer project. I recommend attending one of the associate-level instructor-led training courses to gain some knowledge before tackling it.

It is aimed at developers, but may also be of interest to systems administrators or architects. It is intended to be completed over several sessions, and consists of about 24 hours of videos and labs. It uses Python Flask for the example application, and all the code is supplied.

You build the architecture shown in the diagram above.

  • The user signs in using Cognito
  • Uploads a photo to S3
  • S3 sends a message to SNS
  • SNS triggers a Lambda function
  • Lambda uses Rekognition to label the photo (see the sample command after this list)
  • The labels are stored in RDS
  • SNS also sends a message to SQS
  • SQS demonstrates a use case where we have an on-premises photo printing service
  • Trace data is sent to X-Ray
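
The Rekognition step in the list above boils down to a single API call. As a rough sketch, with placeholder bucket and object names rather than anything from the course code, the Lambda function does the equivalent of:

aws rekognition detect-labels --image "S3Object={Bucket=my-photo-bucket,Name=photo.jpg}" --max-labels 10

The response is a list of labels with confidence scores, which the course application then stores in RDS.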

One challenge with following on-demand training, especially with rapidly evolving technologies, is that labs can fail for all sorts of reasons. My organization delivers both classroom and virtual instructor-led training, where such problems are always quickly resolved. In this project, at the time of writing (April 2020), all the labs worked.

This is a screenshot of the working app:

 

AWS Fargate

AWS Fargate is covered with one slide in the course “Architecting on AWS”. The course notes say:

AWS Fargate is a technology for Amazon ECS and Amazon Elastic Container Service for Kubernetes (Amazon EKS) that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate eliminates the need for you to interact with or think about servers or clusters. Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them.

To get started with Fargate, you can follow this walkthrough.

The ECS console may change more frequently than the consoles of other services, so bear that in mind. This post was written on 04/03/20.

As a prerequisite, you should be familiar with:

  • EC2 Auto Scaling
  • Application Load Balancer

I assume you have covered some theory on Docker, ECS, and Fargate.

Navigate to ECS, Clusters, Get Started.

In the container definition section, there is a choice of images to use.  You could use all the defaults, and then you would end up with a single task, no load balancer, and a “Welcome to ECS” web page.

Instead, I will do a few customisations.

Choose the custom image, and click Configure

In the Image field, carefully type nginxdemos/hello

This refers to an image in Docker Hub that displays a web page including the container’s IP address, which is useful for load balancer testing.
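
If you want to preview the page before deploying anything, you can run the same image locally (assuming you have Docker installed; the host port here is arbitrary):

docker run -d -p 8080:80 nginxdemos/hello

Then browse to http://localhost:8080 to see the page.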

In port mappings, for Container Port, type 80, then click Update

Each task will get an ENI, with its own IP, accepting traffic on port 80. An Application Load Balancer will load balance across each IP.

In the Define Your Service section, click Edit, and change the number of desired tasks from 1 to 2. This gives us higher availability. Choose Application Load Balancer, click Save, then Next.

Give the cluster a name, eg mycluster, then click Next, Create.

It will take up to 10 minutes to create all the resources, but probably a lot less.

It is using CloudFormation in the background. As the stack builds, click on the resources it creates to understand what it is doing.

It creates the following:

  • A Cluster, with 2 running tasks
  • A Task Definition, which includes the name of the image that we supplied
  • A Service with 2 running tasks
  • A CloudWatch log group where you will see HTTP requests to the instance as they are being health checked by the ALB
  • A CloudFormation stack which ECS is using right now to create the resources
  • A VPC and two subnets
  • A Security Group allowing the ALB to send requests to the tasks

When it completes, go to EC2, Target Groups to see a new target group with 2 registered targets, both healthy, each in a different AZ.

Go to EC2, Load Balancers, copy the DNS name of the ALB, and paste it into a new browser tab to see the NGINX demo page. Refresh the browser to see the load balancing. The demo page displays the private IP and DNS name of the task.

Go to ECS, Clusters and click on the name of the cluster. There is 1 service with 2 desired and 2 running tasks.

Click on the Tasks tab to see the 2 running tasks.

Select one of them and click Stop to simulate a failure. It will be replaced to satisfy the desired count of 2 tasks.
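
You could also do this from the CLI. A minimal sketch, assuming the cluster name mycluster used earlier (first list the tasks to get a task ID, then stop one):

aws ecs list-tasks --cluster mycluster
aws ecs stop-task --cluster mycluster --task <task-id-from-previous-command>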

Click on the ECS instances tab. There are none, that’s the point of Fargate mode!

Click on the Services tab, then click on the name of the service, then click on the Events tab to see the log of events when ECS started and registered the tasks with the ALB.

Now to set up Auto Scaling:

Go to ECS, Clusters, click on the name of the cluster, then the service, and click on the Auto Scaling tab.

Click on Update (top right of page), Next, Next until you get to the Service Auto Scaling page.

Choose Configure Service Auto Scaling to adjust the service desired count. Choose minimum 1, desired 2, maximum 4.

Add Scaling Policy, leave the scaling policy type at Target Tracking, give the policy a name eg mypolicy.

Choose ALBRequestCountPerTarget and enter a target value of 1. Click Save, Next Step, Update Service.

This policy makes it easy to demonstrate Auto Scaling for this test.
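
Under the hood this uses Application Auto Scaling. For reference, a rough CLI equivalent, assuming the cluster name mycluster from earlier, a service named myservice, and a placeholder resource label (built from your own ALB and target group ARNs), would be:

aws application-autoscaling register-scalable-target --service-namespace ecs --scalable-dimension ecs:service:DesiredCount --resource-id service/mycluster/myservice --min-capacity 1 --max-capacity 4

aws application-autoscaling put-scaling-policy --service-namespace ecs --scalable-dimension ecs:service:DesiredCount --resource-id service/mycluster/myservice --policy-name mypolicy --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration '{"TargetValue": 1.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ALBRequestCountPerTarget", "ResourceLabel": "app/<alb-name>/<alb-id>/targetgroup/<tg-name>/<tg-id>"}}'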

Go to CloudWatch to see a target tracking alarm in the Insufficient Data state.

On the NGINX demo page, tick the AutoRefresh tick box. It will cause the browser to send regular requests to the ALB.

After a little more than 3 minutes, the alarm should go into the ALARM state.

Go to the ECS service and click on the Events tab to see that ECS has increased the desired count and will have started up to 4 tasks.

Go to the NGINX demo page to see it load balancing across the tasks.

If you want, clear the AutoRefresh so that ECS will eventually scale in. The scale-in alarm looks at a longer period of 15 minutes, and there is also a deregistration delay (aka Connection Draining) of 300 seconds by default, so it will take a long time to scale in.

To clean up, go to the cluster and click Delete Cluster. It will delete everything that was created.


Route 53 Alias Records

From the AWS docs:

Amazon Route 53 alias records provide a Route 53–specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record.

A frequently asked question about Route 53 concerns the difference between Alias and CNAME records.

Either of these can be used when assigning domain names to AWS resources such as an ALB.

To test this I am using an ALB. I set up a simple scenario; here are the prerequisites:

  • A web server instance using port 80 in a public subnet.
  • A Route 53 hosted zone.

Create an ALB and target group. Register the instance with the target group. Copy the DNS name of the ALB (which looks like albname.xxxx.eu-west-1.elb.amazonaws.com) and paste it into a browser tab to verify that it works.

In Route 53, create a record set eg www.thetrainit.com. Choose “A” record from the drop down and choose Alias=Yes. In the Alias target, choose the name of the ALB from a dropdown.
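
The same record can be created with the CLI. This is only a sketch; the hosted zone ID, domain name and ALB values are placeholders, and note that the AliasTarget HostedZoneId must be the ALB’s own canonical hosted zone ID (shown on the load balancer’s description tab), not the Route 53 zone ID:

aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> --change-batch '{"Changes": [{"Action": "CREATE", "ResourceRecordSet": {"Name": "www.thetrainit.com", "Type": "A", "AliasTarget": {"HostedZoneId": "<alb-canonical-hosted-zone-id>", "DNSName": "albname.xxxx.eu-west-1.elb.amazonaws.com", "EvaluateTargetHealth": true}}}]}'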

Test access from a browser. It should work.

The client’s local DNS server will send a query for www.thetrainit.com. Route 53 will find that it is mapped to the DNS name of the ALB and return the IP of the ALB.

Now to test using a record type of CNAME and Alias=No.

Delete the Route 53 record set and create a new one.

This time, choose CNAME from the drop down and Alias=No.

Copy and paste the DNS name of the ALB into the Value field.

Test it from a browser. It should also work.

The above test also works when using a Classic Load Balancer (still featured in the Architecting Professional exam at the time of writing, February 2020).

Now to explain why a record of type A and Alias=Yes is recommended:

When using the CNAME record, the client’s LDNS server will query for www.thetrainit.com. Route 53 will return the CNAME value albname.xxxx.eu-west-1.elb.amazonaws.com. The LDNS server will send a second query for that, and AWS will return the IP of the ALB.

When using the record of type “A” and Alias=Yes, there is no second query. The browser will query for www.thetrainit.com and the IP will be returned.
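
You can observe the difference with a DNS lookup tool such as dig (the domain here is just my test domain; substitute your own):

dig +noall +answer www.thetrainit.com

With the Alias record the answer should contain only A records; with the CNAME record set you should see the CNAME followed by the A records it resolves to.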

In both cases, AWS ensures the returned IP is of a healthy ALB node.

The benefits of the type “A”, Alias=Yes record:

  • There is no second query.
  • An “A” Alias record can map to the zone apex (eg thetrainit.com) but CNAME cannot.
  • Queries to the “A” Alias record are free.

Further information:

When you choose Alias=No, you can choose a TTL. The default is 300 seconds. When you choose Alias=Yes there is no TTL field. It is 60 seconds. This is because the IP of the ALB is subject to change, for example if an ALB node fails and is replaced.

If using an ALB in a different account to Route 53, you will not see the resource in the drop down in the Value field. Instead, you can copy and paste the DNS name of the ALB.


Simulated Slot Machine Browser Game

In the “Architecting on AWS” course there is a slide showing how the AWS SDK for JavaScript can be used to allow a client side script to invoke a Lambda function.

In this example, a simulated slot machine browser-based game invokes a Lambda function that generates the random results of each slot pull, returning those results as the file names of images used to display the result. The images are stored in an Amazon S3 bucket that is configured to function as a static web host for the HTML, CSS, and other assets needed to present the application experience.

The example is taken from:

Tutorial: Creating and Using Lambda Functions

The tutorial is mainly about using the SDK for JavaScript and so is outside the scope of an Architecting course. However, it is a very interesting project. There are SDKs for iOS, Android, and JavaScript. The SDK for JavaScript can be used both in web browsers and in Node.js.

I would recommend the following as prerequisite knowledge, all of which is covered in the course:

  • Lambda
  • Assigning Roles to Lambda
  • S3 Static Web Sites
  • Dynamo DB
  • Cognito
  • Use of Access Keys

The tutorial uses a config.json file configured with suitable credentials (an access key ID and secret access key) to give permission to the supplied Node.js scripts to create the resources for the project. In the Architecting course we talk about the use of passwords for console access, and access keys for CLI and SDK access. So this is an example of using keys along with the AWS SDK.

The tutorial assets are downloaded from GitHub. Most of the tutorial involves using supplied Node.js scripts to create and configure the resources.

At the end of the tutorial, you click on the S3 Static Web site URL:

Clicking on the red handle spins the wheels and invokes the Lambda function, which selects images to display at random. The names of the images come from a Dynamo DB table, and the actual images from S3.

Prerequisites for the tutorial are:

  • Install Node.js on your computer to run various scripts that help set up the resources
  • Install the AWS SDK for JavaScript on your computer to run the setup scripts.

If you don’t want to install these on your computer, you could use Cloud9.

It’s fairly easy to get started with Cloud9, and it already has Node.js installed.

The tutorial guides you through the following:

Create an Amazon S3 bucket to store all the browser assets. These include the HTML file, all graphics files, and the CSS file. The bucket is configured as a static website.

The JavaScript code in the browser, a snippet of which you see in the slide above, needs authentication to access AWS services. Within web pages, you typically use Amazon Cognito Identity to do this authentication.

The Architecting course covers Cognito at a very high level. This is the course slide on Cognito:

Elsewhere I have a post about another project called “WildRydes”. That project also used a browser and a Lambda function, but the browser made a call to API Gateway, which then invoked Lambda. The browser gained permission to invoke API Gateway by getting a token from a Cognito User Pool after signing up and logging in, and then providing that token to API Gateway. That scenario did not require Identity Pools.

In this tutorial, Cognito User Pools are not used. Instead, Identity Pools are used, and the user doesn’t have to log in. Identity Pools can support unauthenticated identities by providing a unique identifier and AWS credentials for users who do not authenticate with an identity provider. This is typically used to allow guest access to an application. The identity pool is configured to allow the browser to assume a role with permissions to invoke the Lambda function. The JavaScript running within the browser supplies a Cognito Identity Pool ID.
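
For context, an identity pool that allows guest access can be created with a single call. A minimal sketch (the pool name is made up, and in the tutorial this is done for you by the supplied scripts):

aws cognito-identity create-identity-pool --identity-pool-name SlotMachineGuests --allow-unauthenticated-identities

You would then associate an unauthenticated role with the pool (aws cognito-identity set-identity-pool-roles) that grants lambda:InvokeFunction on the slot-pull function only.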

Next we create an execution role for the Lambda function.

Next we create and populate the Dynamo DB table.

Next we edit the supplied Lambda function to reference the role and the bucket, create a zip file of the Lambda function and upload it to the Lambda service.

Finally, we access the S3 static web site URL to test the application.

Some “gotchas”:

Most of the tutorial uses Node.js scripts to create, configure and populate resources, but sometimes the required command is not explicitly supplied, because the command varies depending on whether you are using Linux, Mac, or Windows. For example, once we have edited the Lambda function to customise it, we need to create a zip file of it:

zip slotpull.js.zip slotpull.js

Also, the instructions to copy some of the assets to the S3 bucket were missing. I downloaded the assets to my local Windows machine, and then used the S3 console to copy the relevant files.
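
If you prefer the CLI, something like the following would do the same job (the local folder and bucket name are placeholders for your own values):

aws s3 cp <local-assets-folder> s3://<your-bucket-name>/ --recursive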

The tutorial doesn’t ask you to change the region in the various scripts. You can use any region as long as you are careful to spot any references to a region and edit the files.

To tear down, delete all the resources as usual. Alternatively, as it is all serverless, you could leave it in place.


Global Accelerator Part 2

In November 2019 AWS launched a new 5-day course, “Architecting on AWS Accelerator”, which includes content from the “Advanced Architecting on AWS” course.

It covers Global Accelerator, so I am revisiting it for the scenario where it is used in front of Load Balancers.

The above slide shows the current model without Global Accelerator.

The problem with it is caching.

Say a user browses to www.example.com and is returned the IP of the LB in us-east-1. Then the LB in us-east-1 goes down for some reason. It will be replaced and its IP will change. The old IP has a high likelihood of being cached either locally or at a DNS cache.

When we configure Global Accelerator, we receive 2 static IPs.

We create a record set in Route 53 with these 2 IPs.

Now when the user browses to www.example.com, the browser resolves the name to those static IPs and tries to make a connection to one of them.

Global Accelerator will typically direct the user to the LB nearest the user. In addition, the user will be routed via the nearest edge location, and from there the traffic will traverse the AWS network rather than the internet.

From the Global Accelerator home page:

To improve the user experience, AWS Global Accelerator directs user traffic to the nearest application endpoint to the client, thus reducing internet latency and jitter. It routes the traffic to the closest edge location via Anycast, then by routing it to the closest regional endpoint over the AWS global network. AWS Global Accelerator quickly reacts to changes in network performance to improve your users’ application performance.

Global Accelerator also uses health checks. In this scenario, rather than using its own health checks, it relies on the health checks of the LB.

To set this scenario up:

As a prerequisite I set up the following:

An instance in us-east-1 with a simple web page identifying its region eg “Hello from us-east-1”

An ALB in us-east-1 targeting the single instance.

I repeated the above for eu-west-1.

I tested access to the two ALBs by browsing to their DNS names.

Now for the Global Accelerator part.

From the Global Accelerator console, create a Global Accelerator. Give it a name.

Add a Listener on TCP/80.

Add an Endpoint Group, choose us-east-1 from a drop down. Repeat for eu-west-1.

Add Endpoint for us-east-1, of type ALB, and choose the ARN of the ALB from a drop down.

Repeat for eu-west-1.

It takes about 5 minutes to deploy, and you receive 2 static IPs.
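
You can also retrieve the static IPs from the CLI. As far as I know the Global Accelerator API is served from the us-west-2 Region, so the region flag below reflects that assumption:

aws globalaccelerator list-accelerators --region us-west-2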

Put either one of them in a browser. You should see the web page of the instance in your closest region.

Stop that instance.

Wait until the target group indicates that the instance is unhealthy.

Refresh the browser. You should see the web page of the instance in the other region.

If you have a Route 53 hosted zone, create a record set. I used ga.thetrainit.com. Add the 2 static IPs of the GA.

Browse to ga.thetrainit.com to test it.

Start the stopped instance, wait for the target group to indicate that it is healthy and test again.

Some further notes:

If I do an nslookup on ga.thetrainit.com it returns the two IPs in a fairly unpredictable order.

From the FAQ:

If one static IP address becomes unavailable due to IP blocking, or unreachable networks, AWS Global Accelerator will provide fault tolerance to client applications by rerouting to a healthy static IP address from the other isolated network zone….With AWS Global Accelerator there is no reliance on IP address caching settings of client devices. The change propagation time is a matter of seconds, thereby reducing downtime of your applications.

To clean up:

Disable the Accelerator, which only took a few seconds for me.

Delete the Accelerator.

Delete the Route 53 record set.

In each region, delete the ALB, Target Group, and terminate the instance.

Some snippets from the FAQ

Q: How does AWS Global Accelerator work with Elastic Load Balancing (ELB)?

If you have workloads that cater to a global client base, we recommend that you use AWS Global Accelerator. If you have workloads hosted in a single AWS Region and used by clients in and around the same AWS Region, you could use an Application Load Balancer or Network Load Balancer to manage such resources.

Q: How is AWS Global Accelerator different from Amazon CloudFront?

Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable (e.g., images and videos) and dynamic content (e.g., API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in a single or multiple AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT) or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic fast regional failover. Both services integrate with AWS Shield for DDoS protection.

Macie Part 2

Revisiting the Macie console after leaving it running for a month or so, I see the locations from which my account has been accessed. As it happens, these are the locations I have been working in recently.

From the docs:

AWS CloudTrail provides you with a history of AWS API calls for your account, including API calls made using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. AWS CloudTrail also enables you to identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address that the calls were made from, and when the calls occurred.

X-Ray

In the “Security Engineering on AWS” course there are two slides on X-Ray.

From the course notes:

AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs.

There is a very cool sample application to get started.

From the X-Ray console, click “Launch a sample application (Node.js)”

It will take a few minutes. It uses CloudFormation to create an Elastic Beanstalk application, and Elastic Beanstalk creates a second stack.

When both stacks are complete, in the Elastic Beanstalk console, click on the Elastic Beanstalk application URL to see the sample application:

Click on the sign-up form and supply a Name and Email address (it doesn’t have to be real). This will be a POST request to the application.

In the X-Ray console, click on “Service Map”, to see a screen similar to this:

The leftmost circle represents the HTTP requests to the web page. The other circles are:

  • A call to the metadata service to retrieve the security credentials supplied by the Instance Profile, used to make the Dynamo DB and SNS API calls.
  • A Dynamo DB table. The application is writing the user’s details to a Dynamo DB table. If you go to the Dynamo DB console you will see it.
  • An SNS Topic. The application is publishing the user’s details to an SNS topic. If you go to the SNS console, you will see an unconfirmed subscription to the email address you used, and a confirmed subscription from an SQS queue.

In the X-Ray console, click “Traces” to see the URLs accessed by the client.

It will include the initial GET to the web application and a POST to the signup page.

You may also see a GET to the favicon, with a 404 not found.

“A favicon /ˈfæv.ɪˌkɒn/ (short for favorite icon), also known as a shortcut icon, website icon, tab icon, URL icon, or bookmark icon, is a file containing one or more small icons, associated with a particular website or web page.”

Drill down to the details of the trace for the signup page.

You see a timeline including the total time for the POST.

This is broken down into the DynamoDB PutItem followed by the SNS Publish, with the times for each.

You may also see the calls to the metadata service to retrieve security credentials.

Back in the application, click “Start” and leave it for 2 minutes to make about 10 automated signups per minute. Now we start to see average figures for each of the circles in the service map.

The application intentionally includes signups with a duplicate email address, which causes Dynamo DB to return a 400 error, and the POST to return a 409 error. These errors can be seen in the traces.
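
You can also query for these traces from the CLI. A rough sketch (the time expressions assume GNU date on Linux; adjust for your platform):

aws xray get-trace-summaries --start-time $(date -d '10 minutes ago' +%s) --end-time $(date +%s) --filter-expression 'http.status = 409'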

“An HTTP 400 status code indicates a problem with your request, such as authentication failure, missing required parameters, or exceeding a table’s provisioned throughput”

Just for fun, I removed the sns:publish permission from the policy attached to the Role that the instance is using.

The service map starts to display orange circles, and you can drill down into the traces to see the detail:

AuthorizationError: User: arn:aws:sts: <output omitted> is not authorized to perform: SNS:Publish on resource: arn:aws:sns:<output omitted>

The POST returns a 500 error.

In summary, X-Ray is helping us to identify both latency issues and intermittent errors returned by a service.

To clean up, delete the X-Ray CloudFormation stack, which will in turn delete the Elastic Beanstalk stack.


EC2 Auto Scaling Purchasing Options

In the “Architecting on AWS” course there is a slide on Autoscaling Purchasing Options.

From the course notes:

Amazon EC2 Auto Scaling supports multiple purchasing options within the same Auto Scaling group (ASG). You can include Spot, On-Demand, and Reserved Instances (RIs) within a single ASG, allowing you to save up to 90% on compute costs.

The following walk through only takes a few minutes to try.

I keep it minimal to demonstrate the basic features and get started quickly, leaving you to try out the many further options.

In order to demonstrate using a mixture of On-Demand and Spot pricing, I will use a steady state Auto Scaling Group of 4:4:4, that is Minimum, Maximum and Desired all of 4. I aim to have a 50%/50% mix of On-Demand and Spot, that is 2 of each.

Another option is to always have a certain number of On-Demand base instances as part of the mix, but I will leave that base number at zero.

When configuring Launch Templates and Auto Scaling Groups there is no reference to Reserved Instances. That part is automatically dealt with by matching your Reserved Instance choices, if any, with your actual running instances across the account, and then billing at the Reserved Instance price.

Everything that follows is done from the EC2 console.

This use case requires the use of a Launch Template.

Create a Launch Template and give it a name. At a minimum, when the Launch Template will be used by an ASG, you must supply an AMI and Instance Type. I chose an Amazon Linux 2 AMI and the t2.micro instance type.

Leave all the rest as defaults. Note that expanding Advanced Details shows that you can request Spot Instances here, but you must not tick this if the use case is to have a mix of On-Demand and Spot within an ASG.

Create an Auto Scaling Group and select the Launch Template. Select “Combine purchase options and instances”.

For Instance Distribution, clear “Use the default” and then set “On-Demand percentage above base” to 50%, leaving everything else at default.

Choose to start with 4 instances. This is the desired number to start with. If you use Auto Scaling policies, then when scaling in and out, this is the starting desired number and it then changes dynamically.

Select a subnet. In production it would be recommended to use more than one.

On the next screen, choose “No scaling policies”. To keep it simple, and to avoid needing to run some kind of simulated stress, we will keep it to a steady state group of 4 instances.
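
For reference, the equivalent configuration can be expressed with the CLI. This is a sketch only; the launch template name, subnet ID and allocation strategy are assumptions, not values from the console walkthrough:

aws autoscaling create-auto-scaling-group --auto-scaling-group-name myasg --min-size 4 --max-size 4 --desired-capacity 4 --vpc-zone-identifier subnet-xxxxxxxx --mixed-instances-policy '{"LaunchTemplate": {"LaunchTemplateSpecification": {"LaunchTemplateName": "mytemplate", "Version": "$Latest"}}, "InstancesDistribution": {"OnDemandBaseCapacity": 0, "OnDemandPercentageAboveBaseCapacity": 50, "SpotAllocationStrategy": "capacity-optimized"}}'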

After a few seconds, you should see 4 instances launching.

We will want to verify the mix of On-Demand and Spot.

It is not possible to see the purchasing options from the instance details. Instead, you can go to EC2, Spot Requests to see 2 active spot requests, both fulfilled.

Alternatively, using CLI, the following will show details of the 2 Spot Instances.

aws ec2 describe-spot-instance-requests

If you select EC2, Spot Requests and click “Savings Summary” you can see “a high-level summary of your savings across all of your running and recently terminated Spot Instances”.

To clean up:

Delete the ASG and Launch Template. The instances will be terminated, and in a few minutes you will see that the Spot requests are closed.

The docs are here

Auto Scaling Groups with Multiple Instance Types and Purchase Options

and here

Introducing the capacity-optimized allocation strategy for Amazon EC2 Spot Instances

and for information about how the spot pricing model has changed recently

New Amazon EC2 Spot pricing model: Simplified purchasing without bidding and fewer interruptions


AWS Certificate Manager Private Certificate Authority

In the course “Security Engineering on AWS” there are 3 slides on ACM Private CA.

From the courseware:

ACM Private Certificate Authority (CA) is a managed private CA service that helps you easily and securely manage the lifecycle of your private certificates. ACM Private CA provides you a highly-available private CA service without the upfront investment and on-going maintenance costs of operating your own private CA. ACM Private CA extends ACM’s certificate management capabilities to private certificates, enabling you to manage public and private certificates centrally and have ACM renew your private certificates automatically on your behalf. You also have the flexibility to create private certificates for applications that require custom certificate lifetimes or resource names. With ACM Private CA, you can create and manage private certificates for your connected resources in one place with a secure, pay as you go, managed private CA service.

To expand on the slide:

  1. The CA Admin uses the Private CA service to create a CA that is subordinate to an existing on-premises company CA.
  2. The subordinate CA certificate is signed by the existing on-premises company CA
  3. The signed certificate is imported back into the Private CA

At the time of the courseware and above slide, the Private CA could only be used with an existing company root or intermediate CA. The idea is that this existing CA is trusted by the end entities, for example browsers, on company managed clients and devices.

However, since the course slide above was written, Private CA (in 2019) added support for creating a root CA. There is no need for an external CA.

That allows you to create a highly available CA hierarchy without maintaining the infrastructure.

As usual, I wanted to create a simple demo or walkthrough.

I will create a Root CA, then a Subordinate CA, which will have a certificate signed by the root. The benefit of a hierarchy like this is that it gives you the flexibility to restrict access to the root CA, with more permissive access to the subordinate CAs, which will issue certificates in bulk. We may also want to delegate sub CAs for different applications, for example one to issue certificates for servers and another for IoT devices, and they might have different admins. Also, auditing and alarms can then be specific to the CA; for example we might want alarms when the root CA issues a certificate.

The prerequisite knowledge for this walkthrough is of EC2 and ALB.

To setup a scenario for testing:

I created an EC2 instance with a web server displaying a “Hello World” page.

I created a target group “mytg” with the single instance registered with it.

I created an ALB “myalb” with a standard HTTP listener, and tested it by pasting the DNS name of the ALB into a browser.

To create a Root CA:

Go to Certificate Manager, Private CAs, Get Started.

Select “Root CA”

Configure the CA Name as follows

Organisation: myorg (or whatever you want)

Common Name: myorg root CA (or whatever you want)

Take all the defaults, clicking “Next” until you see the message “Your Certificate Authority was created successfully. Install a CA certificate to activate your CA”. Press Get Started, Next, Confirm and install.

You should see that the root CA is now active.

To create the subordinate CA:

Go to Certificate Manager, Private CA, Create CA, Subordinate CA.

Organization: myorg

Common Name: myorg subordinate CA

Take all the defaults to finish, confirm and create.

On the success prompt, “Install a CA certificate to activate your CA”, click to get started.

Select ACM private CA to choose the parent CA, select the parent CA from a drop down menu (there will be only one, but what you see in the drop down is its ARN). Once selected, it displays the parent CA type of “root” and its common name.

Note the path length of 0, with choices 0-4. Leave this at 0, which means that this CA will issue certificates directly to end entities rather than to other CAs. Click generate.

Now in the console we see the subordinate CA is active.

To request a certificate:

Go to Certificate Manager, request a certificate, request a private certificate, select the subordinate CA from a drop down list.

Choose a domain name. I used alb.thetrainit.com. You can replace this with whatever you want.

In the console we see the certificate is issued but “In use” is “No”.
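
For reference, the equivalent certificate request from the CLI looks roughly like this (the region, account ID and CA ID are placeholders; use the ARN of your subordinate CA):

aws acm request-certificate --domain-name alb.thetrainit.com --certificate-authority-arn arn:aws:acm-pca:<region>:<account-id>:certificate-authority/<ca-id>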

Now to associate the certificate with the ALB:

Go to the ALB, Listeners tab, Add listener, HTTPS, Add action, Forward to; you will see a drop down list of target groups (mine was called “mytg”).

Because we are using HTTPS we have to choose a certificate. In “Default SSL Certificate” choose the certificate from a drop down list. Click Save.

Go to Certificate Manager to see the “In use” change to “Yes”

Ensure that the Security Group associated with the ALB allows port 443.

At this point you could paste the DNS name of the ALB into a browser tab and prefix “https://” to see browser warnings and the certificate details. You should see that the certificate is not trusted. The error or warning message depends on the browser.

In order for your device to trust the certificate, it needs to trust the Root CA:

In Certificate Manager, Private CAs, Select the Root cert, Action, Export CA Certificate, select “Export certificate body to file” to download the certificate to your client device. It is a PEM file called “certificate.pem”
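
Alternatively, you can export the root CA certificate with the CLI (the ARN is a placeholder for your root CA’s ARN):

aws acm-pca get-certificate-authority-certificate --certificate-authority-arn <root-ca-arn> --query Certificate --output text > certificate.pem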

In the browser, import the certificate into the browser’s certificate store. How this is done depends on the browser. In Chrome, it’s under Settings, Advanced, Privacy and Security, Manage Certificates, and the certificate ends up in the list of Trusted Root CAs.

Now repeat the test of browsing to the ALB. This time the certificate should be trusted but there will still be a warning. This is because the DNS name of the ALB does not match the common name in the certificate.

You could leave it there for this test. In my test, I already have a Route 53 public hosted zone and a registered domain, so I created an ALIAS record mapping alb.thetrainit.com to the DNS name of the ALB.

Then browsing to https://alb.thetrainit.com worked without warnings.

To clean up (there is a monthly cost for Private CA):

Delete the ALB.

Delete the certificate that was used for the ALB listener

Disable and delete the Sub CA and Root CA


Permissions Boundaries

There is one slide in the “Security Engineering on AWS” course about Permissions Boundaries. The text is as follows:

A permissions boundary is an advanced feature in which you limit the maximum permissions that a principal can have. These boundaries can be applied to AWS Organizations organizations or to IAM users or roles.

As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined.

When you use a policy to set the permissions boundary for a user, it limits the user’s permissions but does not provide permissions on its own.

It might be a challenge to understand the use case and how it is used in practice. There is a helpful blog post here.

Say you have a development team with full access to certain services, but no access to IAM.

Say you want the developer to assign a role to EC2, and you want them to create the role, create a policy, attach the policy to the role, and attach the role to an EC2 instance. That is a lot of permissions, and instead we want to keep to the principle of least privilege.

The example in the blog post is lengthy to set up, so I simplify a bit here, and omit the JSON for brevity (see the referenced blog post above for the detailed steps)

The admin does the following tasks:

First the admin creates a boundary policy “DynamoDB_Boundary_Frankfurt”. The policy will allow put, update, and delete on any table in the Frankfurt region. Employees will be required to set this policy as the permissions boundary for the roles they create.

Create “Employee_Policy”. This policy will allow the employee to create roles and policies, attach policies to roles, and attach roles to resources. However, the admin wants to retain control over the naming conventions of the roles and policies for ease of auditing, so the roles and policies must have the prefix MyTestApp. Also, there will be a condition that the above permissions boundary policy must be attached to the role, otherwise the role creation will fail.

Create a role “MyEmployeeRole” and attach the “Employee_Policy”.

Create a policy “Pass_Role_Policy” to allow the employee to pass the roles they create to services such as EC2.

Attach the policy to “MyEmployeeRole”.

The Employee does the following tasks:

Create a role “MyTestAppRole”. The employee must provide the permissions boundary “DynamoDB_Boundary_Frankfurt”, otherwise the role creation will fail. The policy will allow EC2 to assume the role.
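
As a rough sketch of that step (the account ID and trust policy file are placeholders; the role and policy names follow the blog post’s naming), the employee’s create-role call would look something like:

aws iam create-role --role-name MyTestAppRole --assume-role-policy-document file://ec2-trust-policy.json --permissions-boundary arn:aws:iam::<account-id>:policy/DynamoDB_Boundary_Frankfurt

If the --permissions-boundary flag is omitted, the condition in Employee_Policy causes the call to be denied.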

Create a policy “MyTestApp_DDB_Permissions” allowing all DynamoDB actions on a specific table MyTestApp_DDB_Table.

Attach the policy to “MyTestAppRole”

To summarise, the administrator created a permission boundary allowing DynamoDB put, update, delete on all resources in Frankfurt.

The employee created a policy which allowed all actions on a specific table with no region restriction.

The effective permission is to allow put, update, and delete on the specific table MyTestApp_DDB_Table in the Frankfurt region.