WildRydes Real Time Data Processing

There is an awesome project here to process real-time data streams. It is a great way to learn the basics of the Kinesis family. As always, I summarise the project here but also add my own comments, hints and tips.

It is based on the fictional WildRydes ride sharing organisation, where customers can order a ride on one of our fleet of unicorns.

Each unicorn has been fitted with a sensor which sends back location and health information once per second to our operations centre.

The first step is to configure a Kinesis Data Stream. We supply a name and configure the number of shards. In this case, 1 shard will suffice.
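As a rough check on why 1 shard suffices: each shard accepts up to 1,000 records or 1 MB per second for writes. A back-of-envelope sketch (the ~200 byte record size is my assumption, based on the byte counts seen later):

```python
# Back-of-envelope: how many once-per-second unicorns fit in one shard?
# Kinesis write limits per shard: 1,000 records/s and 1 MB/s.
SHARD_RECORDS_PER_SEC = 1_000
SHARD_BYTES_PER_SEC = 1_000_000

record_bytes = 200          # assumed approximate size of one sensor message
records_per_unicorn = 1     # one message per second per unicorn

max_by_records = SHARD_RECORDS_PER_SEC // records_per_unicorn
max_by_bytes = SHARD_BYTES_PER_SEC // record_bytes

# The binding constraint is the record count, not the bytes.
max_unicorns = min(max_by_records, max_by_bytes)
print(max_unicorns)
```

So a single shard comfortably covers a fleet of several hundred unicorns at this message rate.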

As is the case for many projects which involve some code, the project creates a Cloud9 IDE as an environment to run a supplied producer script to send data to the stream. If you have not used Cloud9 before that’s no problem. Instructions are supplied and the IDE is available in minutes.

Once the producer script is running, a consumer script displays the once-per-second output from each unicorn.

A supplied dashboard, running somewhere in AWS, can graphically display the movement of the unicorns. We supply the dashboard with a Cognito Identity Pool ID to give it unauthenticated access to the stream.

In Kinesis, you can monitor the stream. After a while the number of incoming records and bytes should become stable.

It can be tricky to interpret the data. Hover the mouse over the “Incoming Records Limit” legend text and click the “x” to remove it, so that the much smaller incoming records line is displayed at a readable scale. Hover the mouse over “IncomingRecords” to see that it is 300 over the last 5-minute interval. This screenshot was taken when 1 unicorn was flying, generating data at 1-second intervals, which makes 300 records in 5 minutes.

A similar graph shows the number of Bytes:

That is about 58 KB over a 5-minute interval, the flat part of the graph above.
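The numbers are easy to sanity-check. A quick sketch (the ~58 KB figure is read off the graph, so the derived per-record size is only an estimate):

```python
# One unicorn emitting one record per second over a 5-minute window.
unicorns = 1
seconds = 5 * 60

records = unicorns * seconds     # should match the IncomingRecords metric
bytes_total = 58_000             # approximate, read from the bytes graph
bytes_per_record = bytes_total / records

print(records)                   # 300 records in 5 minutes
print(round(bytes_per_record))   # roughly 190-200 bytes per sensor message
```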

Well, you could leave the project there, and if you had not seen Kinesis Data Streams before, you now have an idea of what it can do.

If you want to continue, you create an Amazon Kinesis Data Analytics application to read from the Amazon Kinesis stream, and calculate the total distance travelled for each Unicorn.

Referring to the architecture diagram above, you specify the input data stream, and a second data stream for the output.

The application automatically discovers the “schema”. We copy and paste some SQL code to aggregate and transform the data to calculate the total distance travelled and send that once per minute to the output stream.
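The aggregation is conceptually simple: sum the distance between consecutive position readings for each unicorn. Here is a sketch of that idea in Python; the workshop's actual SQL uses its own distance helper, so the haversine formula below is just my stand-in to illustrate the computation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def total_distance_km(points):
    """Sum the distance between consecutive (lat, lon) sensor readings."""
    return sum(haversine_km(*a, *b) for a, b in zip(points, points[1:]))

# Three hypothetical once-per-second readings from one unicorn:
track = [(47.6062, -122.3321), (47.6063, -122.3321), (47.6064, -122.3321)]
print(round(total_distance_km(track), 4))
```

The analytics application does the equivalent over a one-minute tumbling window, per unicorn.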

A supplied consumer script shows the results from the second stream:

Notice the time. Subsequent records will appear exactly on the minute.

The next part of the project is to create a Lambda function, triggered by the output stream above, which writes the records to DynamoDB as they arrive.
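The shape of such a function is worth seeing. Kinesis delivers records to Lambda base64-encoded, so the handler decodes each one before writing. This is a hedged sketch, not the workshop's code: the field names (Name, StatusTime, Distance) are my assumptions, and the real handler would call DynamoDB via boto3 (left as a comment so the sketch runs anywhere):

```python
import base64
import json

def handler(event, context):
    """Sketch of a Kinesis-triggered Lambda that prepares DynamoDB items.

    Kinesis delivers record payloads base64-encoded under
    event["Records"][i]["kinesis"]["data"]. A real handler would call
    boto3's table.put_item() for each decoded item.
    """
    items = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        items.append({
            "Name": payload["Name"],
            "StatusTime": payload["StatusTime"],
            "Distance": payload["Distance"],
        })
        # table.put_item(Item=items[-1])  # real DynamoDB write, omitted here
    return items

# Exercise the handler with a fake one-record event:
fake = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps({"Name": "Shadowfax", "StatusTime": "2020-01-01 12:00:00",
                "Distance": 1234}).encode()).decode()}}]}
print(handler(fake, None))
```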

Here I am using the console-based query editor to query the per-minute items for a particular unicorn and date.

The last part of the project is to demonstrate Firehose, where we want to save the raw sensor data from the initial stream into S3 for later analysis or ad-hoc queries using Athena.

We create a Firehose Delivery Stream, selecting the original source stream containing the per-second sensor data and a bucket for the output, and specifying a delivery frequency, for which we choose 60 seconds. So every 60 seconds, a new file containing the per-second data will be created in S3.
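With a 60-second buffer and one record per second per unicorn, the file sizes and counts are predictable. A quick sketch (the per-record byte size is an estimate taken from the stream metrics earlier):

```python
# Firehose buffers for 60 s; each unicorn emits 1 record/s of ~193 bytes.
buffer_seconds = 60
unicorns = 1
record_bytes = 193  # estimated from the stream's bytes/records metrics

records_per_file = buffer_seconds * unicorns
approx_file_bytes = records_per_file * record_bytes
files_per_hour = 3600 // buffer_seconds

print(records_per_file, approx_file_bytes, files_per_hour)
```

So expect about 60 small files per hour per delivery stream at this volume. (Firehose also buffers by size; at this low throughput the time trigger fires first.)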

To demonstrate using Athena for ad-hoc queries, we create an “external table”, which tells Athena about the format of the data in S3.
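The DDL looks something like the following sketch. The column list is inferred from the fields used in the queries below plus the position fields (the MagicPoints column and the exact SerDe are my assumptions), and the bucket name is a placeholder for wherever your Delivery Stream writes:

```sql
-- Sketch: tell Athena how to read the raw JSON sensor files in S3.
-- Replace YOUR_BUCKET with the bucket the Firehose Delivery Stream targets.
CREATE EXTERNAL TABLE IF NOT EXISTS wildrydes (
  Name         string,
  StatusTime   timestamp,
  Latitude     float,
  Longitude    float,
  Distance     float,
  HealthPoints int,
  MagicPoints  int
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://YOUR_BUCKET/';
```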

Now we can do queries from within the Athena console. Here are a few ideas I tried out:

SELECT DISTINCT name FROM wildrydes
SELECT COUNT(DISTINCT name) FROM wildrydes

SELECT * FROM wildrydes WHERE name = 'Shadowfax'

SELECT * FROM wildrydes WHERE healthpoints < 160
SELECT name, statustime FROM wildrydes WHERE healthpoints < 160
SELECT name, statustime, healthpoints FROM wildrydes WHERE healthpoints < 160 ORDER BY statustime DESC
SELECT name, statustime, healthpoints FROM wildrydes WHERE name = 'Shadowfax' AND healthpoints < 160 ORDER BY statustime DESC

The project has a clean-up step, as always. You do *not* want to leave the Kinesis resources running! However, what I found is that the Lambda/S3/DynamoDB side is, as always, at nearly zero cost for this type of project, so if you want to come back to the project quickly, you can leave those in place and just:

  • Delete the Firehose Delivery Stream.
  • Delete the Data Analytics App
  • Delete the 2 Data Streams

Then when coming back to the project, the whole thing can be quickly brought up in a few steps:

  • Recreate the 2 Data Streams.
  • Start Cloud9 and the producer script
  • Recreate the Kinesis Analytics Application
  • Delete and recreate the Lambda trigger
  • Recreate the Firehose Delivery Stream


Currently, due to the pandemic, the fleet of unicorns is not actually flying, so for the moment, all the above incoming data is only simulated.

Sorry about that.



AWS License Manager

In the “Architecting on AWS Accelerator” course there is 1 slide on License Manager.

License Manager was launched in 2018.

From the documentation:

“AWS License Manager makes it easier to manage your software licenses from software vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, and then enforces these rules when an instance of EC2 gets launched”

As a very simple test to get started, I created a custom AMI of a Windows Server instance. Note that in practice, you would have to check the license agreement with the vendor. As an example, some Windows products require dedicated hosts or dedicated instance tenancy, while others can be used with shared tenancy. These requirements can be emulated using License Manager rules.

To get started:

Launch a Windows Server t2.micro instance.

Create an Image of the instance.

In License Manager, Create a License Configuration. Give it a name.

For License Type, choose vCPU, Number of vCPUs 1, and tick Enforce license limit. Submit.

Select the License Configuration, Action, Associate AMI and associate it with your custom AMI.

Note that Licenses Consumed is 0 out of 1.

Launch an instance using the custom image. I chose a t2.micro which has 1 vCPU.

As soon as it is running, License Manager shows the vCPUs in use as 1 out of 1.
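The enforcement behaviour is easy to picture as a counter with a hard limit. This toy simulation is not the License Manager API, just the logic it applies at launch time:

```python
class LicenseConfig:
    """Toy model of a License Manager configuration with a hard vCPU limit."""

    def __init__(self, vcpu_limit, enforce=True):
        self.vcpu_limit = vcpu_limit
        self.enforce = enforce
        self.consumed = 0

    def launch(self, instance_vcpus):
        """Simulate launching an instance from the associated AMI."""
        if self.enforce and self.consumed + instance_vcpus > self.vcpu_limit:
            raise RuntimeError("Launch blocked: license limit exceeded")
        self.consumed += instance_vcpus

    def terminate(self, instance_vcpus):
        self.consumed -= instance_vcpus

config = LicenseConfig(vcpu_limit=1)
config.launch(1)       # first t2.micro: 1 of 1 vCPUs consumed
try:
    config.launch(1)   # a second launch exceeds the limit and fails
except RuntimeError as e:
    print(e)
```

Without "Enforce license limit" ticked, the second launch would succeed and License Manager would merely report the overage.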

Launch another. It fails as expected with a message:

To clean up:

  • Terminate the instance that was launched successfully
  • In License Manager, click on the License Configuration, then the Associated AMIs tab, and disassociate the AMI. Delete the License Configuration.

Of course there are a lot more features; here are some additional ones:

  • License Manager simplifies the management of your software licenses that require Amazon EC2 Dedicated Hosts.
  • License Manager provides a mechanism to discover software running on existing EC2 instances using AWS Systems Manager
  • License Manager is integrated with Amazon EC2, AWS Systems Manager, AWS Organizations, AWS Service Catalog, and AWS Marketplace.


Global Accelerator Part 2

In November 2019 AWS launched a new 5 day course “Architecting on AWS Accelerator” which includes content from the “Advanced Architecting on AWS” course.

It covers Global Accelerator, so I am revisiting it for the scenario where it is used in front of Load Balancers.

The above slide shows the current model without Global Accelerator.

The problem with it is caching.

Say a user browses to www.example.com and is returned the IP of the LB in us-east-1. Then the LB in us-east-1 goes down for some reason. It will be replaced and its IP will change. The old IP has a high likelihood of being cached either locally or at a DNS cache.

The user browses to www.example.com

When we configure Global Accelerator we receive 2 static IPs.

We create a record set in Route 53 with these 2 IPs.

The browser will try to make a connection to one of the IPs.

Global Accelerator will typically direct the user to the LB nearest the user. In addition, the user will be routed via the nearest edge location, and from there the traffic will traverse the AWS network rather than the internet.

From the Global Accelerator home page:

To improve the user experience, AWS Global Accelerator directs user traffic to the nearest application endpoint to the client, thus reducing internet latency and jitter. It routes the traffic to the closest edge location via Anycast, then by routing it to the closest regional endpoint over the AWS global network. AWS Global Accelerator quickly reacts to changes in network performance to improve your users’ application performance.

Global Accelerator also uses health checks. In this scenario, rather than using its own health checks, it relies on the health checks of the LB.

To set this scenario up:

As a prerequisite I set up the following:

An instance in us-east-1 with a simple web page identifying its region eg “Hello from us-east-1”

An ALB in us-east-1 targeting the single instance.

I repeated the above for eu-west-1.

I tested access to the two ALBs by browsing to their DNS names.

Now for the Global Accelerator part.

From the Global Accelerator console, create a Global Accelerator. Give it a name.

Add a Listener on TCP/80.

Add an Endpoint Group, choose us-east-1 from a drop down. Repeat for eu-west-1.

Add Endpoint for us-east-1, of type ALB, and choose the ARN of the ALB from a drop down.

Repeat for eu-west-1.

It takes about 5 minutes to deploy, and you receive 2 static IPs.

Put either one of them in a browser. You should see the web page of the instance in your closest region.

Stop that instance.

Wait until the target group indicates that the instance is unhealthy.

Refresh the browser. You should see the web page of the instance in the other region.

If you have a Route 53 hosted zone, create a record set. I used ga.thetrainit.com. Add the 2 static IPs of the GA.

Browse to ga.thetrainit.com to test it.

Start the stopped instance, wait for the target group to indicate that it is healthy and test again.

Some further notes:

If I do an nslookup on ga.thetrainit.com it returns the two IPs in a fairly unpredictable order.

From the FAQ:

If one static IP address becomes unavailable due to IP blocking, or unreachable networks, AWS Global Accelerator will provide fault tolerance to client applications by rerouting to a healthy static IP address from the other isolated network zone….With AWS Global Accelerator there is no reliance on IP address caching settings of client devices. The change propagation time is a matter of seconds, thereby reducing downtime of your applications.

To clean up:

Disable the Accelerator, which only took a few seconds for me.

Delete the Accelerator.

Delete the Route 53 record set.

In each region, delete the ALB, Target Group, and terminate the instance.

Some snippets from the FAQ:

Q: How does AWS Global Accelerator work with Elastic Load Balancing (ELB)?

If you have workloads that cater to a global client base, we recommend that you use AWS Global Accelerator. If you have workloads hosted in a single AWS Region and used by clients in and around the same AWS Region, you could use an Application Load Balancer or Network Load Balancer to manage such resources.

Q: How is AWS Global Accelerator different from Amazon CloudFront?

Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable (e.g., images and videos) and dynamic content (e.g., APIs acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in a single or multiple AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT) or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic fast regional failover. Both services integrate with AWS Shield for DDoS protection.


X-Ray

In the “Security Engineering on AWS” course there are two slides on X-Ray.

From the course notes:

AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs.

There is a very cool sample application to get started.

From the X-Ray console, click “Launch a sample application (Node.js)”

It will take a few minutes. It uses CloudFormation to create an Elastic Beanstalk application, and Elastic Beanstalk creates a second stack.

When both stacks are complete, in the Elastic Beanstalk console, click on the Elastic Beanstalk application URL to see the sample application:

Click on the sign-up form and supply a Name and Email address (it doesn’t have to be real). This will be a POST request to the application.

In the X-Ray console, click on “Service Map”, to see a screen similar to this:

The leftmost circle represents the HTTP requests to the web page. The other circles are:

  • A call to the metadata service to retrieve the security credentials supplied by the Instance Profile, used to make the DynamoDB and SNS API calls.
  • A DynamoDB table. The application is writing the user’s details to a DynamoDB table. If you go to the DynamoDB console you will see it.
  • An SNS topic. The application is publishing the user’s details to an SNS topic. If you go to the SNS console, you will see an unconfirmed subscription to the email address you used, and a confirmed subscription from an SQS queue.

In the X-Ray console, click “Traces” to see the URLs accessed by the client.

It will include the initial GET to the web application and a POST to the signup page.

You may also see a GET to the favicon, with a 404 not found.

“A favicon /ˈfæv.ɪˌkɒn/ (short for favorite icon), also known as a shortcut icon, website icon, tab icon, URL icon, or bookmark icon, is a file containing one or more small icons, associated with a particular website or web page.”

Drill down to the details of the trace for the signup page.

You see a timeline including the total time for the POST.

This is broken down into the DynamoDB PutItem followed by the SNS Publish, with the times for each.

You may also see the calls to the metadata service to retrieve security credentials.

Back in the application, click “Start” and leave it for 2 minutes to generate about 10 automated sign-ups per minute. Now we start to see average figures for each of the circles in the service map.

The application intentionally includes sign-ups with a duplicate email address, which causes DynamoDB to return a 400 error and the POST to return a 409 error. These errors can be seen in the traces.

“An HTTP 400 status code indicates a problem with your request, such as authentication failure, missing required parameters, or exceeding a table’s provisioned throughput”

Just for fun, I removed the sns:publish permission from the policy attached to the Role that the instance is using.

The service map starts to display orange circles, and you can drill down the traces to see the detail:

AuthorizationError: User: arn:aws:sts: <output omitted> is not authorized to perform: SNS:Publish on resource: arn:aws:sns:<output omitted>

The POST returns a 500 error.

In summary, X-Ray helps us to identify both latency issues and intermittent errors returned by a service.

To clean up, delete the X-Ray CloudFormation stack, which will in turn delete the Elastic Beanstalk stack.




Global Accelerator

In the Architecting on AWS course, the module on HA discusses how Route 53 could be used for a multi-region HA application. The small print at the bottom of the slide says “Consider using Global Accelerator for stringent SLAs”.

The courseware notes describe it as follows:

Global Accelerator provides your network with greater resiliency by removing the role of DNS in failover. It can protect your users and applications from caching issues and allows nearly instantaneous redirection of traffic to healthy endpoints. Additionally, when you add new endpoints into your architecture, they can receive traffic immediately without waiting for DNS propagation.

Global Accelerator uses static IP addresses that are Anycast from multiple AWS edge locations, giving us a fixed entry point address that enables ingress of user traffic at an edge location closest to them.

So let’s replace the scenario in the slide with Global Accelerator. Getting started and seeing it working only takes a few minutes.

Global Accelerator gives you a pair of static IP addresses, both of which represent your endpoints, which can be EIPs, ALBs or NLBs.

Users are directed to the nearest edge location, and from there, traffic flows to the nearest endpoint. Using the slide example, users in the U.S. would be directed to the ALB in us-west-1. That concept is similar to Route 53 with an appropriate routing policy, and Global Accelerator also supports health checks, like Route 53. One difference is that the user’s traffic flows from their nearest edge location to the endpoint across the AWS backbone network. Also, as stated in the quote above, you are protected from DNS caching issues, where browsers, applications and resolvers may all have cached a no-longer-reachable IP.

To set up a Global Accelerator, I launched an EC2 instance in the us-west-1 region, and another in the eu-west-1 region. A web page displayed the region, extracted from the metadata, for ease of testing. I gave each an EIP.

Go to the Global Accelerator service.

Create an accelerator. You are presented with a wizard.

Add a listener on port 80, TCP.

Add two endpoint groups choosing us-west-1 and eu-west-1 from a dropdown list.

Add two endpoints. For each endpoint group, you select the endpoint type as EIP, and select the appropriate EIP from a drop down list.

You are given two static IP addresses either of which can be used for testing. Either IP represents the application and will direct you to the instance in the nearest region.

Global Accelerator is health checking both endpoints. Stop the instance closest to you and you should be directed to the other one.

There is an hourly cost, so to tear it down, disable the Global Accelerator, which takes a minute or so, and delete it. Terminate the instances and release the EIPs.



Serverless Lab

There is a lab on Serverless Architecture in the Architecting course, making use of S3, Lambda, SNS, SQS and DynamoDB to process a file of customer banking transactions.

However there is no lab on API Gateway. Even though hands-on knowledge is not required for the exam, it is always good to try things out.

There is a walkthrough, Build a Serverless Web Application, on the AWS site.

The exercise uses API Gateway, S3, Lambda and DynamoDB. For good measure, although it is not covered in the Architecting course or exam, it also uses Cognito for authentication.

The exercise takes about 2 hours. CloudFormation scripts are also provided, but then you would miss all the learning points.

Basically you sign in to the application using your email address, and then click on a map to order a taxi, except in this case it’s a flying taxi in the form of a unicorn. When you click on the map and order a unicorn, your coordinates and username are posted to API Gateway, which triggers Lambda to find the nearest available unicorn. Lambda also logs your ride in DynamoDB.


Low Cost WordPress

It seems there are many approaches to deploying WordPress on AWS.

For example, there are CloudFormation templates available. When I used these, they didn’t quite work out of the box, and I had to change some permissions on the Linux file system when it came to doing things like uploading media.

Given that I was going to have to mess around with Linux anyway, my next approach was to follow a walkthrough on the AWS site. You install a LAMP stack manually, then install and configure WordPress manually. It’s a good learning exercise.

Another option is Lightsail, if it fits your needs.

In the end, I went for a Bitnami WordPress AMI from the Marketplace.

For a low-cost, low-volume site it works well: at the time of writing, this site is running on a t2.nano at less than $5 a month. You could use a t2.micro to be free-tier eligible if your account is still in its first year.

Later I might go for a t2.micro spot instance:

The price has been relatively stable at $0.0038 per hour for 3 months, which works out at about $2.73 a month.
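The arithmetic, assuming the $0.0038 figure is an hourly spot price and a 30-day month:

```python
# Rough monthly cost of a t2.micro spot instance at the observed price.
spot_price_per_hour = 0.0038   # observed in the spot price history, assumed constant
hours_per_month = 24 * 30

monthly_cost = spot_price_per_hour * hours_per_month
print(f"${monthly_cost:.2f} per month")
```

Which comes to roughly $2.73–$2.74 a month depending on rounding, versus around $5 for the t2.nano on demand.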


Scalable WordPress on AWS

There is an interesting document Reference Architecture for Scalable WordPress-Powered Websites

CloudFront is used to cache static content from an S3 bucket (plugins are available to integrate with S3), and dynamic content from an Application Load Balancer in front of the web instances, which are in an Auto Scaling group. Compute-optimized instances might be a good choice for the web servers. An ElastiCache cluster caches frequently queried data, and an Aurora MySQL instance hosts the WordPress database.

As discussed in the Architecting course, for Auto Scaling the web tier should be stateless. WordPress uses cookies for session storage, and the DB layer holds the persistent data. However, WordPress also stores some data, including configuration data, on the local disk of the instances, so the entire WordPress installation directory can instead be moved onto an EFS shared file system.