The terms “EBS backed” and “Instance Store backed” may be confusing. When you launch an instance you see this familiar screen. Note that for every one of these AMIs the Root Device Type is “EBS backed”; in other words, for all of them the root device will be an EBS volume. If you choose Community AMIs, you can find some which are Instance Store backed.
I chose a Windows 2012 R2 Base AMI and clicked Next.
The Instance Storage column is marked “EBS Only” for many of the instance types, but here I have chosen an i3.large (about $0.15 an hour), which comes with 1 x 475 (SSD), i.e. one 475 GB SSD instance store volume. Click Next.
This shows me that the root volume is EBS, but it allows me to add more volumes. The additional volumes can be EBS or Instance Store. If I choose Instance Store, I can add exactly one instance store volume, which matches the 1 x 475 (SSD) on the previous screen, and I can’t choose its size.
It also allows me to add multiple EBS volumes if I choose EBS in the drop down.
Once launched, you see the D: drive. There is a text file named “Important” in the root of the drive.
Instance store can also be used for any instances which do not need to retain their volumes after the instance is terminated, for example after a scale-in activity.
You may be using an Amazon Linux based instance to become familiar with the AWS CLI, which is used extensively in the Systems Operations on AWS official training course. The AWS CLI is installed by default on the Amazon Linux AMI, but command completion is not enabled.
The command to enable it is:

complete -C '/usr/bin/aws_completer' aws
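To make completion persist across sessions, the same command can be added to your shell profile. A sketch, assuming bash and the default aws_completer path on Amazon Linux:

```shell
# Enable AWS CLI tab completion in the current shell session
# (assumes aws_completer is at its default Amazon Linux location):
complete -C '/usr/bin/aws_completer' aws

# Persist it for future login shells:
echo "complete -C '/usr/bin/aws_completer' aws" >> ~/.bashrc
```

After this, typing `aws ec2 desc` and pressing Tab should offer completions.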
At the time of writing in April 2018, the most common number of AZs in a region is 3. The next most common number is 2.
The following from the S3 FAQ seems to be inconsistent with that:
Q: What is an AWS Availability Zone (AZ)?
An AWS Availability Zone is an isolated location within an AWS Region. Within each AWS Region, S3 operates in a minimum of three AZs, each separated by miles to protect against local events like fires, floods, etc.
My assumption is that, where required, there is an additional geographically isolated facility that is not exposed as an AZ.
Sometimes the course notes do in fact use the word “facility”, e.g.:
By default, data in Amazon S3 is stored redundantly across multiple facilities and multiple devices in each facility.
There is a lab on Serverless Architecture in the Architecting course, making use of S3, Lambda, SNS, SQS and DynamoDB to process a file of customer banking transactions.
However there is no lab on API Gateway. Even though hands-on knowledge is not required for the exam, it is always good to try things out.
There is a walkthrough, Build a Serverless Web Application, on the AWS site.
The exercise uses API Gateway, S3, Lambda and DynamoDB. For good measure, although it is not covered in the Architecting course or exam, it also uses Cognito for authentication.
The exercise takes about 2 hours. CloudFormation scripts are also provided, but if you use them you miss all the learning points.
Basically you sign in to the application using your email address, and then click on a map to order a taxi, except in this case it’s a flying taxi in the form of a Unicorn. When you click on the map and order a Unicorn, your coordinates and username are posted to API Gateway, which triggers a Lambda function to find the nearest available Unicorn. Lambda also logs your ride in DynamoDB.
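The back-end step can be sketched as a minimal Lambda-style handler. This is an illustration only, not the workshop’s actual code: the fleet, field names and coordinates below are made up, and the DynamoDB write is omitted.

```python
import json
import math

# Hypothetical in-memory unicorn fleet (the real lab would look these
# up elsewhere; names and coordinates here are invented).
FLEET = [
    {"Name": "Rocinante", "Latitude": 47.61, "Longitude": -122.33},
    {"Name": "Shadowfax", "Latitude": 47.66, "Longitude": -122.31},
]

def handler(event, context):
    """Lambda-style handler: API Gateway passes the request body as a
    JSON string; return the nearest unicorn to the pickup location."""
    body = json.loads(event["body"])
    pickup = body["PickupLocation"]
    nearest = min(
        FLEET,
        key=lambda u: math.hypot(
            u["Latitude"] - pickup["Latitude"],
            u["Longitude"] - pickup["Longitude"],
        ),
    )
    return {"statusCode": 200, "body": json.dumps({"Unicorn": nearest})}
```

The real lab adds authentication via Cognito and records the ride in a DynamoDB table, but the request/response shape is the part API Gateway cares about.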
It seems there are many approaches to deploying WordPress on AWS.
For example, there are CloudFormation templates available. When I used these, they didn’t quite work out of the box, and I had to change some permissions on the Linux file system when it came to doing things like uploading media.
Given that I was going to have to mess around with Linux anyway, my next approach was to follow a walkthrough on the AWS site: you install a LAMP stack manually, then install and configure WordPress manually. It’s a good learning exercise.
Another option is Amazon Lightsail, if it fits your needs.
In the end, I went for a Bitnami WordPress AMI from the AWS Marketplace.
For a low-cost, low-volume site this works well: at the time of writing, this site is running on a t2.nano at less than $5 a month. You could use a t2.micro to be free-tier eligible if your account is still in its first year.
Later I might go for a t2.micro spot instance:
The spot price has been relatively stable at $0.0038 an hour for the last 3 months, which works out at about $2.74 a month (720 hours x $0.0038).
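The arithmetic behind that estimate, assuming a 720-hour (30-day) month:

```python
# Estimated monthly cost of a t2.micro spot instance at the observed
# spot price of $0.0038 an hour, assuming a 30-day month.
hourly_price = 0.0038
monthly_cost = hourly_price * 24 * 30  # 720 hours -> $2.736
```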
Snapshots are a way to back up your EBS volumes.
There are no labs specifically on taking snapshots in the course. In my opinion it would be a reasonable lab for an introductory course.
A good way of understanding snapshots is to try them:
- Take snapshot1
- Make a change
- Take snapshot2 (this is incremental i.e. only changed blocks)
- Delete snapshot1
- Restore using the second snapshot.
Behind the scenes, when you delete the first snapshot, AWS ensures that the second snapshot contains all the blocks required for it to be fully usable.
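The block-level behaviour can be illustrated with a toy model (pure Python, illustration only, not the AWS implementation): each snapshot stores only the blocks that changed since its parent, and deleting a snapshot merges its blocks into the child so the child remains fully restorable.

```python
# Toy model of incremental snapshots. A volume is a dict of
# block_id -> data. Each snapshot stores only blocks that changed
# since its parent snapshot.

def take_snapshot(volume, parent=None):
    """Store only blocks that differ from the parent's restored view."""
    base = restore(parent) if parent else {}
    changed = {b: d for b, d in volume.items() if base.get(b) != d}
    return {"parent": parent, "blocks": changed}

def restore(snapshot):
    """Rebuild the full volume by walking the parent chain."""
    if snapshot is None:
        return {}
    full = restore(snapshot["parent"])
    full.update(snapshot["blocks"])
    return full

def delete_snapshot(snapshot, child):
    """Fold the deleted snapshot's blocks into its child, so the
    child stays complete (the child's own versions win)."""
    merged = dict(snapshot["blocks"])
    merged.update(child["blocks"])
    child["blocks"] = merged
    child["parent"] = snapshot["parent"]
```

For example: snapshot a two-block volume, change one block, take a second (incremental) snapshot, delete the first, and `restore(snap2)` still returns the full volume including the unchanged block.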
To try it:
- Launch an instance. I used Windows.
- Navigate to Volumes > Snapshots.
- Take a snapshot.
- Make a change. I made a new folder on the desktop.
- Take another snapshot. This snapshot is incremental, i.e. it only contains the blocks changed since the first snapshot.
- Now for the magic bit: delete the first snapshot. Behind the scenes, AWS ensures that the second snapshot still contains all the blocks required for it to be fully usable.
- Stop the instance and detach the volume from it.
- Create a volume from the second snapshot and attach it to the instance. Ignore the instructions about the recommended device names in the attach dialog and use /dev/sda1, which makes it the root volume.
- Start the instance. It will be restored to the point at which you took the second snapshot: the new folder is there, even though the first snapshot was deleted.
As you know, snapshots are stored in S3, but not in any bucket you can see, and it is difficult to know exactly how much space they are using. People worry that taking frequent snapshots will result in a large bill. One approach is to script the creation of snapshots, for example taking a daily snapshot and then deleting all the older ones.
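The retention logic of such a script can be sketched independently of the AWS API. The snapshot records below are hypothetical dicts; in a real script the create and delete calls would go through the AWS CLI or an SDK, and only the keep/delete decision is shown here.

```python
from datetime import date, timedelta

def snapshots_to_delete(snapshots, today, keep_days=7):
    """Return the snapshots older than the retention window.
    Each snapshot is a dict with at least "id" and "date" keys
    (hypothetical shape for this sketch)."""
    cutoff = today - timedelta(days=keep_days)
    return [s for s in snapshots if s["date"] < cutoff]
```

With a 7-day window and one snapshot a day, each run deletes everything more than a week old, keeping the snapshot bill roughly constant.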
There is also an interesting AWS document, Reference Architecture for Scalable WordPress-Powered Websites.