AD Connector

 

AD Connector is designed to give you an easy way to establish a trusted relationship between your Active Directory and AWS. When AD Connector is configured, the trust allows you to:

  • Sign in to AWS applications such as Amazon WorkSpaces, Amazon WorkDocs, and Amazon WorkMail by using your Active Directory credentials.
  • Seamlessly join Windows instances to your Active Directory domain either through the Amazon EC2 launch wizard or programmatically through the EC2 Simple System Manager (SSM) API.
  • Provide federated sign-in to the AWS Management Console by mapping Active Directory identities to AWS Identity and Access Management (IAM) roles.

In this walkthrough, I demonstrate the use case of seamlessly joining Windows instances to your on-premises AD.

To keep it simple, rather than using an on-premises AD, I simulate it with an EC2 instance. In real life, you would need a VPN or Direct Connect connection. I used a Windows Server 2012 R2 Base AMI with an instance type of t2.medium, choosing the default VPC and the subnet in AZ A. In real life the domain controller would have a static IP address, but the test worked fine using the dynamic private address.

For the Security Group, which in real life would be the on-premises firewall, I opened up all inbound traffic, just for the short duration of the test. See the link below for the actual required ports, which need to allow DNS, Kerberos, and LDAP from the CIDR ranges of the AD Connector subnets.

Once ready, RDP to the public address of the instance and configure it as a domain controller.

As it has been a while since I worked with AD, I followed this article:

https://social.technet.microsoft.com/wiki/contents/articles/12370.windows-server-2012-set-up-your-first-domain-controller-step-by-step.aspx

I used the domain name onprem.com.

Create a service account which will be used by the AD connector. Follow the instructions here:

https://docs.aws.amazon.com/directoryservice/latest/admin-guide/prereq_connector.html

Alternatively, you could use a Domain Admin account when creating the AD connector later, but creating a service account with the minimum necessary privileges is best practice.

Now for the main part of the walkthrough: creating an AD Connector.

In the AWS console, choose Directory Service, Set up a Directory, AD Connector, and the Small option. Note that it's about $0.05 an hour, but free-trial eligible for a month. I chose the default VPC and the subnets in AZ A and AZ B.

Supply the directory name onprem.com and the NetBIOS name onprem.

Supply the IP address of the domain controller.

Supply the Service Account Username and Password that you created earlier.

It takes about 5 minutes to create the AD Connector.
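If you prefer the CLI, the equivalent call is ds connect-directory. A minimal sketch, assuming placeholder VPC, subnet, and DNS IP values, and a service account named adconnector purely for illustration:

# connect to the on-premises directory (IDs, IPs, and account name are placeholders)
aws ds connect-directory \
  --name onprem.com \
  --short-name onprem \
  --password '<service-account-password>' \
  --size Small \
  --connect-settings '{"VpcId":"vpc-0abc12345","SubnetIds":["subnet-0aaa11111","subnet-0bbb22222"],"CustomerDnsIps":["172.31.0.10"],"CustomerUserName":"adconnector"}'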

Now to test it by joining a new Windows Server to the on-premises domain.

Launch a Windows instance. I used 2012 R2 Base again, launching it into the default VPC, subnet in AZ A.

The interesting bit is on the Configure Instance page, where you choose the Domain join directory and see the onprem.com domain name available. Note that it says that for the domain join to succeed, you must select an IAM role that has the AWS managed policies AmazonSSMManagedInstanceCore and AmazonSSMDirectoryServiceAccess attached. You can choose to have it create the role for you, and supply a name. I called it ADConnectorRole.

I tagged the instance “Member Server”

You can now RDP into it using the credentials of a Domain User account. I used onprem\administrator.
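As mentioned in the introduction, you can also join an existing instance to the domain programmatically through Systems Manager, by creating an association with the AWS-JoinDirectoryServiceDomain document. A minimal sketch, assuming placeholder directory and instance IDs and that the instance already has the role with the two SSM policies attached:

# join an existing, SSM-managed instance to the directory (IDs are placeholders)
aws ssm create-association \
  --name AWS-JoinDirectoryServiceDomain \
  --targets Key=instanceids,Values=i-0123456789abcdef0 \
  --parameters directoryId=d-1234567890,directoryName=onprem.com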

To clean up:

Delete the AD Connector Directory

Terminate the member server and the domain controller

 

 

PrivateLink

In the Advanced Architecting course, there is a section on PrivateLink.

The text in the course:

AWS PrivateLink is a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services.

You do not have to have an Internet gateway, NAT device, public IP address, AWS Direct Connect (DX) connection, or VPN connection to communicate with the service. Traffic between your VPC and the service does not leave the Amazon network. With PrivateLink, endpoints are created directly inside of your VPC using elastic network interfaces and IP addresses in your VPC’s subnets.

To use AWS PrivateLink, create an interface VPC endpoint for a service in your VPC. This creates a network interface in your subnet with a private IP address to serve as an entry point for traffic destined to the service.

This lab will focus on one of the use cases in the slide: “Enables you to privately connect your VPC to supported AWS services”. I will use the EC2 service for the test. In other words, I will access the EC2 APIs using interface endpoints. One way of testing that is to use the CLI command:

aws ec2 describe-instances

As a pre-requisite to the lab, I launched an Amazon Linux 2, t2.micro instance in the default VPC in AZ-A in region eu-west-1, with SSH allowed by the Security Group. I gave it an Admin role and used “aws configure” to configure the default region. There are many ways of achieving the same thing. The goal is simply to be able to issue CLI commands.

To keep the lab as simple as possible, I am using a public subnet. However, the same idea applies to private subnets, where an instance would be able to access the AWS APIs without using an internet gateway or NAT.

To test accessing the APIs without interface endpoints, issue the command:

aws ec2 describe-instances

It should work.

To see the IP address that the command is using to access the APIs, we need to know the names of the AWS service endpoints. They are documented here:

https://docs.aws.amazon.com/general/latest/gr/ec2-service.html

To see the IP address being resolved, issue the command:

dig ec2.eu-west-1.amazonaws.com 

<output omitted> 

;; QUESTION SECTION:
;ec2.eu-west-1.amazonaws.com. IN A

;; ANSWER SECTION:
ec2.eu-west-1.amazonaws.com. 6 IN A 176.32.118.30


Note that the dig returns a public IP. In other words, the EC2 APIs are being accessed over the internet.

Now to create the endpoint:

Choose Services, VPC, Endpoints, Create Endpoint, and select com.amazonaws.eu-west-1.ec2 (or the equivalent for your region).

Choose the default VPC and, to keep it simple, select the subnet in AZ A. In real life, select more than one subnet for high availability. An ENI will be created in each subnet that you choose.

Leave “Enable DNS names” selected. For the Security Group, select or create one which allows all traffic in and out. This SG is associated with the interface endpoint and controls access to the ENI.

For Policy, leave it at full access. This controls which user or service can access the endpoint.

Click Create Endpoint

It will be pending for a couple of minutes.

You can look at the details of the endpoint to see the private IP of the created ENI.
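For reference, the same endpoint can be created from the CLI; a minimal sketch, with placeholder VPC, subnet, and security group IDs:

# create the interface endpoint (IDs are placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0abc12345 \
  --service-name com.amazonaws.eu-west-1.ec2 \
  --subnet-ids subnet-0aaa11111 \
  --security-group-ids sg-0bbb22222 \
  --private-dns-enabled

# view the endpoint details, including the ENI created in the subnet
aws ec2 describe-vpc-endpoints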

From the instance, repeat the ec2 describe-instances command. It should still work as before, but the traffic is now going over PrivateLink.

Repeat the dig command, to see output similar to:

;; QUESTION SECTION:
;ec2.eu-west-1.amazonaws.com. IN A

;; ANSWER SECTION:
ec2.eu-west-1.amazonaws.com. 60 IN A 172.31.45.146

Note that a private address is being returned.

The traffic stays on the AWS network, is more secure, and takes a more optimal path.

An interface endpoint is about $0.01 an hour.

To clean up:

Delete the Interface Endpoint.

File Gateway

As discussed in several AWS courses, Storage Gateway enables your on-premises applications to use AWS cloud storage, for backup, disaster recovery or migration.

In this lab, we will use File Gateway with NFS shares. There will be no requirement for anything on premises, as we will deploy File Gateway as an EC2 instance.

The lab assumes knowledge of how to create a bucket and an EC2 instance, and how to SSH into the instance.

The lab takes about half an hour.

Decide on the region you will use. I used eu-west-1.

Create an S3 bucket to serve as the storage location for the file share.

I called it “filegateway-<my initials>”, taking all the defaults; in other words, just choose Create Bucket.
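If you prefer the CLI, the bucket can be created with a single command (substitute your own suffix):

aws s3 mb s3://filegateway-<your-initials> --region eu-west-1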

Deploy an Amazon Linux 2 t2.micro instance to be used for the NFS client. This would normally be on premises. I took all the defaults, that is, using the default VPC, and gave it a tag Key:Name, Value:NFS Client. For the security group, allow SSH as we will log into it in order to mount the File Gateway share.

Deploy and configure the file gateway appliance as an EC2 instance as follows. In real life, you would normally deploy it as a virtual appliance on ESXi or Hyper-V.

From the console, choose Services, Storage Gateway, and click Get Started if prompted.

Select File Gateway, Next, select EC2, and choose Launch Instance. Select the t2.xlarge instance type. Refer to the documentation for the officially supported instance types. If you choose t2.micro, you will likely see errors later when activating the gateway.

On configure instance details, I took all the defaults in order to use the default VPC.

On the storage screen, select Add New Volume and take all the defaults, except enable “Delete on Termination” so that it's not left lying around when we clean up. The device name defaulted to /dev/sdb. The size defaulted to 8 GB; in real life, the recommended minimum is 150 GB. This disk will be used for upload cache storage.

I tagged the instance with a Name of “File Gateway Appliance” in order to keep track of it.

For the security group, add some rules as follows. In real life, please consult the documentation.

  • Ingress port 22 is not needed for this lab, but can be used to interact directly with the file gateway appliance for troubleshooting.
  • Ingress port 80 from your web browser. This is for gateway activation and can be removed after activation.
  • Ingress ports 111 and 2049 from the NFS client. These are for NFS file sharing data transfer. For the allowed source, I used 172.31.0.0/16, because the client is in the default VPC.
  • I left the outbound rules at the default, which allows egress on port 443 for communication with the Storage Gateway service.

When the instance is ready, note the public IP address.

Return to the Storage Gateway browser tab and click Next.

Select the Public endpoint and choose Next

On the Connect to gateway screen, paste the public IP address of the file gateway, then choose Connect to Gateway.

On the Activate Gateway screen, give the gateway a name: I chose “File Gateway Appliance”. Choose Activate Gateway.

It takes a few seconds to configure the local disk to use for upload cache storage.

On the Configure local disks screen, you may get a warning that the recommended minimum disk capacity (150 GB) is not met. This disk is used for the upload cache.

Choose Configure Logging and Disable logging, Save and Continue.

When the status of the gateway is Running:

Choose Create file share, configure the name of the bucket created earlier, choose NFS, then Next.

On the Configure how files are stored in Amazon S3 screen, select S3 standard and choose Next.

A role will be created for the gateway to access the bucket.

On the review screen, you may see a warning that the share will accept connections from any NFS client. To limit the share to certain clients, choose Edit next to Allowed clients, and edit the CIDR block. I used 172.31.0.0/16, as the client is in the default VPC. Choose Create file share and wait for the status to change to Available. This takes about a minute.
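For reference, a file share can also be created from the CLI. A minimal sketch, assuming placeholder gateway and role ARNs and the bucket used in this lab:

# create an NFS file share backed by the S3 bucket (ARNs are placeholders)
aws storagegateway create-nfs-file-share \
  --client-token my-unique-token-001 \
  --gateway-arn arn:aws:storagegateway:eu-west-1:111122223333:gateway/sgw-12345678 \
  --role arn:aws:iam::111122223333:role/FileGatewayBucketAccessRole \
  --location-arn arn:aws:s3:::filegateway-smc \
  --client-list 172.31.0.0/16 \
  --default-storage-class S3_STANDARD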

Select the share, and you will see the command to mount the file share on Linux, similar to this:

mount -t nfs -o nolock,hard 172.31.42.177:/filegateway-smc [MountPath]

Log in to the NFS client and create a directory to use to sync data with the bucket.

sudo mkdir -p /mnt/nfs/s3

Mount the file gateway file share using the mount command displayed earlier, replacing the mount path with the path above, for example:

sudo mount -t nfs -o nolock,hard 172.31.42.177:/filegateway-smc /mnt/nfs/s3

If this fails, it is likely to do with the security group rules created earlier.

Verify the share has been mounted:

df -h

Create a file, for example with nano file1.txt, and add a line “This is file1.txt”.

Copy it to the mount path:

cp -v file1.txt /mnt/nfs/s3

Verify that the file appears in the bucket. For me, it appeared within a few seconds.
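If you have the CLI handy, you can also list the bucket contents to confirm (bucket name as used above):

aws s3 ls s3://filegateway-smc/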

Try copying some more files, editing a file, and deleting a file.

To clean up:

Be careful of the order. I have not tried doing it in a different order, but I would not be surprised if there were problems.

  • Select the Storage Gateway, Actions, Delete gateway. It is removed immediately.
  • From the EC2 console, terminate the File Gateway Appliance, as it is not terminated automatically.
  • Terminate the NFS Client instance.
  • Delete the S3 bucket.

Well-Architected Framework

The AWS Well-Architected Framework was developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure.

The Cloud Practitioner and Architecting on AWS courses both have one slide on each pillar.

The pillars are described in a set of white papers, which can be downloaded here.

They are recommended reading for the Architecting Associate exam.

I summarise here, adding some examples to the bullet points that are presented in the official courseware.

Operational Excellence

When creating a design or architecture, you must be aware of how it will be deployed, updated, and operated. It is imperative that you work toward reducing defects, making fixes safe, and enabling observability through logging and instrumentation.

  • Perform operations as code.

For example, use CloudFormation templates, Systems Manager Run Command scripts, and shell scripts, and place them under a source control system such as GitHub.
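As a simple illustration of operations as code, a version-controlled template can be deployed repeatably from the command line; the template and stack names below are hypothetical:

# deploy (or update) an environment from a template kept in source control
aws cloudformation deploy \
  --template-file web-tier.yaml \
  --stack-name web-tier-test \
  --parameter-overrides InstanceType=t3.micro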

  • Annotate documentation.

For example, the code or script itself can describe its intended function.

  • Make frequent, small, reversible changes.

Making small changes reduces the scope and impact of each change. Many of the deployment services support the ability to roll back changes.

  • Refine operations procedures frequently.

As the workload evolves, update runbooks, scripts, and documentation to match.

  • Anticipate failure.

Build a replica of your production environment, possibly using simulated data, and for example, terminate EC2 instances to check how the application recovers. Disable a microservice and, depending on its criticality to the overall system, check how the application as a whole is affected.

  • Learn from all operational failures.

When you find a problem, update procedures, scripts and documentation. In other words, don’t make the same mistake twice.

Security

Security deals with protecting information and mitigating possible damage.

  • Implement a strong identity foundation.

Use the principle of least privilege. For example, if an EC2 instance needs to write to S3, it doesn’t need read permission, and it doesn’t need long-term credentials. Instead, it can use a role. An operator may not need access to the data in order to maintain the system.

Review privileges on a regular basis, for example, in case someone has been given a privilege for a one-off event. Does a user need full access to a service? Many AWS managed policies provide full access where it may not be required.

CloudTrail can help identify what a user has done, and therefore what privileges they require.

  • Enable traceability.

Be able to track who did what and when using CloudTrail and AWS Config. Use CloudTrail as the basis for checking whether an action is normal or not, possibly using machine learning.

  • Apply security at all layers.

Apply to all layers (e.g., edge network, VPC, subnet, load balancer, every instance, operating system, and application).

  • Automate security best practices.

For example, use CloudFormation templates, where the security can be built into the templates themselves. Use CloudFormation drift detection to detect manual changes. Use AWS Config to automate the remediation of non-compliant configurations.

Customize the delivery of AWS CloudTrail and other service-specific logging to capture API activity globally and centralize the data for storage and analysis.

Integrate the flow of security events and findings into a notification and workflow system such as a ticketing system, a bug/issue system, or other security information and event management (SIEM) system.

  • Protect data in transit and at rest.

Encrypt data as required to meet legal and compliance requirements. Classify your data into sensitivity levels and apply protection where appropriate.

  • Prepare for security events.

For example, if a system is compromised, have a procedure or playbook to follow: adjust security group rules to isolate the compromised instance and deregister it from its load balancer to contain the incident, then place the instance on a forensic network for analysis.

Reliability

The reliability pillar encompasses the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

  • Test recovery procedures

Use automation to simulate different failures or to recreate scenarios that led to failures before.

  • Automatically recover from failure.

Use automated recovery processes that work around or repair a failure. For example, use multiple regions or AZs.

Know the availability requirements of the application. For example, 99.99% availability allows roughly 52 minutes of downtime per year. If manual intervention is involved, that is an unrealistic figure.

  • Scale horizontally to increase aggregate system availability.

For example, use EC2 Auto Scaling, replacing one large resource with multiple smaller ones to reduce the impact of a single failure. Be aware of service limits; for example, limits on the number of instances or IP addresses may cause issues with Auto Scaling.

  • Stop guessing capacity

Monitor demand and system utilization, and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning. For example, use AWS Auto Scaling, which includes the scaling of DynamoDB.
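As an illustration, DynamoDB read capacity can be registered for automatic scaling from the CLI; the table name and limits below are hypothetical, and a scaling policy would then be attached with put-scaling-policy:

# let Application Auto Scaling manage read capacity for a table
aws application-autoscaling register-scalable-target \
  --service-namespace dynamodb \
  --resource-id table/Orders \
  --scalable-dimension dynamodb:table:ReadCapacityUnits \
  --min-capacity 5 \
  --max-capacity 100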

  • Manage change in automation.

For example, use CloudFormation change sets to manage changes in the infrastructure.

Performance Efficiency

When considering performance, you want to maximize your performance by using computation resources efficiently and maintain that efficiency as the demand changes.

  • Democratize advanced technologies.

Technologies such as NoSQL databases are available as managed services, so you can focus on product development. In situations where technology is difficult to implement yourself, consider using a vendor. In implementing the technology for you, the vendor takes on the complexity and knowledge, freeing your team to focus on more value-added work.

  • Go global in minutes

Go global through the use of multiple Regions, or reduce latency by using edge locations. CloudFront, AWS WAF, and Lambda@Edge all integrate with the edge locations.

  • Use serverless architectures.

They remove the need for you to run and maintain servers. They offer high performance at low cost.

  • Experiment more often.

Test out new ideas in a way that is not possible with on-premises hardware. Quickly carry out comparative testing using different types of instances, storage, or configurations.

  • Apply mechanical sympathy.

Use the technology approach that aligns best with what you are trying to achieve. If a new service is released, research it, as it might save you money.

Consider data access patterns when you select database or storage approaches.

Cost optimization

Cost optimization is the ability to avoid or reduce unneeded cost.

Cost optimization is an ongoing requirement of any good architectural design. The process is iterative and should be refined and improved throughout your production lifetime.

  • Adopt a consumption model.

For example, you can stop test resources when they are not in use. Right-size instances by monitoring how the resources are being used.

  • Measure overall efficiency.

Measure the costs associated with delivering an application or service, for example using cost allocation tags. The goal is to deliver business value at the lowest price point.

  • Stop spending money on data center operations.

AWS does the heavy lifting of racking, stacking, and powering servers, so you can focus on your customers and business projects rather than on IT infrastructure.

  • Analyze and attribute expenditure.

Using tags, you can attribute costs to the various business owners.

Often, the cost of data transfer is overlooked.

  • Use managed services to reduce cost of ownership.

This removes the operational burden of maintaining servers.

 

AWS License Manager

In the “Architecting on AWS Accelerated” course there is 1 slide on License Manager.

License Manager was launched in 2018.

From the documentation:

“AWS License Manager makes it easier to manage your software licenses from software vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, and then enforces these rules when an instance of EC2 gets launched”

As a very simple test to get started, I created a custom AMI of a Windows Server instance. Note that in practice, you would have to check the license agreement with the vendor. As an example, some Windows products require dedicated hosts or dedicated instance tenancy, while others can be used with shared tenancy. These requirements can be emulated using License Manager rules.

To get started:

Launch a Windows Server t2.micro instance.

Create an Image of the instance.

In License Manager, Create a License Configuration. Give it a name.

For License Type, choose vCPU, Number of vCPUs 1, and tick Enforce license limit. Submit.
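The same configuration can be created from the CLI; a minimal sketch (the configuration name is arbitrary):

# a hard limit of 1 vCPU, matching the console settings above
aws license-manager create-license-configuration \
  --name windows-vcpu-test \
  --license-counting-type vCPU \
  --license-count 1 \
  --license-count-hard-limit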

Select the License Configuration, Action, Associate AMI and associate it with your custom AMI.

Note that Licenses consumed is 0 out of 1.

Launch an instance using the custom image. I chose a t2.micro which has 1 vCPU.

As soon as it is running, License Manager shows the vCPUs in use as 1 out of 1.

Launch another. It fails, as expected, with a licensing error message.

To clean up:

  • Terminate the instance that was launched successfully
  • In License Manager, click on the license configuration, then the Associated AMIs tab, and disassociate the AMI. Delete the license configuration.

Of course there are a lot more features; here are some additional ones:

  • License Manager simplifies the management of your software licenses that require Amazon EC2 Dedicated Hosts.
  • License Manager provides a mechanism to discover software running on existing EC2 instances using AWS Systems Manager.
  • License Manager is integrated with Amazon EC2, AWS Systems Manager, AWS Organizations, AWS Service Catalog, and AWS Marketplace.

 

AWS Pricing Calculator

Most AWS instructor-led courses describe the use of the AWS Simple Monthly Calculator to estimate your monthly bill. This tool became known by some as the “Not So Simple Monthly Calculator”.

On June 30, 2020, AWS will no longer support the Simple Monthly Calculator and all users will be redirected to the newer “AWS Pricing Calculator”.

AWS Pricing Calculator was launched in 2018 and has been evolving to include more services.

“AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs.”

The tool simplifies the choices that you have to make, for example you can get a quick estimate for EC2 instances without delving too deeply into the different EC2 options.

To get started, I input some data loosely based on the “Large Web Application” sample that was supplied with the Simple Monthly Calculator. This is a Ruby on Rails application serving approximately 100,000 page views per month.

I added the following services and supplied data:

Route 53

  • 1 hosted zone
  • 1 million standard queries per month

Elastic Load Balancer

  • 1 Application Load Balancer
  • 10 GB processed bytes per month

EC2

  • 4 × m5.large, EC2 Instance Savings Plans rate, 1-year term, no upfront payment
  • 300 GB SSD gp2

RDS

  • 2-node Aurora MySQL-Compatible cluster, db.r5.large, 1-year reserved term, no upfront payment
  • 20 GB storage

S3

  • 30 GB S3 Standard storage.
  • 10000 PUT, COPY, POST, LIST requests
  • 100000 GET, SELECT requests
  • 300 GB Outbound Data Transfer

You can save and share your estimate as a URL. It will be saved for 3 years.

This is my saved URL

You can also export it as a CSV.

 

Wild Rydes: Building the Future of Unicorn Transportation

Fans of Wild Rydes will know that you can request a Wild Ryde from any location. Today, the CTO of Wild Rydes made this announcement:

“In order to protect the safety of our employees and customers, pick-up points will be at designated landing pads. These include parks and open spaces.”

A unicorn, who wished to remain anonymous, told us:

“I welcome this decision. Flying along streets is fine in science fiction films like The Fifth Element, but has proved too dangerous in real life.”

April 1st, 2020