Permissions Boundaries

There is one slide in the “Security Engineering on AWS” course about Permissions Boundaries. The text is as follows:

A permissions boundary is an advanced feature in which you limit the maximum permissions that a principal can have. These boundaries can be applied to AWS Organizations organizations or to IAM users or roles.

As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined.

When you use a policy to set the permissions boundary for a user, it limits the user’s permissions but does not provide permissions on its own.

It might be a challenge to understand the use case and how it is used in practice. There is a helpful blog post here.

Say you have a development team with full access to certain services, but no access to IAM.

Say you want the developer to assign a role to an EC2 instance. They need to create the role, create a policy, attach the policy to the role, and attach the role to the instance. That is a lot of IAM permissions, and instead we want to keep to the principle of least privilege.

The example in the blog post is lengthy to set up, so I simplify a bit here and show only rough sketches of the policy JSON (see the referenced blog post above for the detailed steps and the full policies).

The admin does the following tasks:

First, the admin creates a boundary policy “DynamoDB_Boundary_Frankfurt”. The policy allows put, update, and delete on any table in the Frankfurt region. Employees will be required to set this policy as the permissions boundary for the roles they create.
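As a rough sketch (the ARN pattern and the account ID placeholder are mine, not the blog post's exact JSON), the boundary policy could look something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
            ],
            "Resource": "arn:aws:dynamodb:eu-central-1:<account_id>:table/*"
        }
    ]
}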

Create “Employee_Policy”. This policy allows the employee to create roles and policies, attach policies to roles, and attach roles to resources. However, the admin wants to retain control over the naming conventions of the roles and policies for ease of auditing, so the roles and policies must have the prefix MyTestApp. There is also a condition that the boundary policy above must be set as the permissions boundary of any role the employee creates; otherwise, the role creation will fail.
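The key ingredient is the iam:PermissionsBoundary condition key. A sketch of the relevant statement might look like this (simplified; the full policy in the blog post covers more actions, such as creating the policies themselves):

{
    "Effect": "Allow",
    "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy"
    ],
    "Resource": "arn:aws:iam::<account_id>:role/MyTestApp*",
    "Condition": {
        "StringEquals": {
            "iam:PermissionsBoundary": "arn:aws:iam::<account_id>:policy/DynamoDB_Boundary_Frankfurt"
        }
    }
}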

Create a role “MyEmployeeRole” and attach the “Employee_Policy”.

Create a policy “Pass_Role_Policy” to allow the employee to pass the roles they create to services such as EC2.

Attach the policy to “MyEmployeeRole”.
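The pass-role statement could be as simple as this (again a sketch, scoped to the same naming prefix):

{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::<account_id>:role/MyTestApp*"
}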

The Employee does the following tasks:

Create a role “MyTestAppRole”. The employee must provide the permissions boundary “DynamoDB_Boundary_Frankfurt”; otherwise, the role creation will fail. The role's trust policy allows EC2 to assume the role.
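From the CLI, the boundary is supplied at role creation time; for example (the trust policy file name is my own placeholder, containing the standard EC2 trust relationship):

# hypothetical: create the role with the mandatory permissions boundary attached
aws iam create-role --role-name MyTestAppRole --assume-role-policy-document file://ec2-trust-policy.json --permissions-boundary arn:aws:iam::<account_id>:policy/DynamoDB_Boundary_Frankfurt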

Create a policy “MyTestApp_DDB_Permissions” allowing all DynamoDB actions on a specific table, MyTestApp_DDB_Table.

Attach the policy to “MyTestAppRole”.
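That permissions policy might look like this (a sketch; note the wildcard region in the ARN, which matters for the summary below):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:*:<account_id>:table/MyTestApp_DDB_Table"
        }
    ]
}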

To summarise, the administrator created a permissions boundary allowing DynamoDB put, update, and delete on all resources in Frankfurt.

The employee created a policy which allowed all actions on a specific table with no region restriction.

The effective permissions are the intersection of the two: put, update, and delete on the specific table MyTestApp_DDB_Table in the Frankfurt region.

 

KMS Grants

There is one slide in the “Security Engineering on AWS” course which describes KMS Key Policies and Grants.

The corresponding text is as follows:

In addition to using AWS IAM to protect key usage, AWS KMS provides two other security mechanisms for protecting your keys: key policies and grants. In order to define resource-based permissions, you need to attach policies to the keys. The policies let you specify who has permission to use the key and what actions they can perform. A key policy specifies who can manage a key and which user or role can encrypt or decrypt with the key. Typically, most users set key policies by using the Encryption Keys section of the IAM console. Either way, key policies share a common syntax with the IAM policy specification.

With grants you can programmatically delegate the use of KMS customer master keys (CMKs) to other AWS principals. You can use them to allow access, but not deny it. Grants are typically used to provide temporary permissions or more granular permissions. You can also use key policies to allow other principals to access a CMK, but key policies work best for relatively static permission assignments. Also, key policies use the standard permissions model for AWS policies in which users either have or do not have permission to perform an action with a resource.

Grants can be revoked (canceled) by any user who has the kms:RevokeGrant permission on the CMK.

It may be a challenge to understand the difference between key policies and grants. The text says that “key policies work best for relatively static permission assignments”. A grant does not expire automatically either, but it can be revoked. Say you have a workflow where a component is granted access to a key, uses it, and then the access needs to be revoked. This could be achieved by changing the key policy, but such a frequent change could accidentally affect other users and roles.

Working with grants requires the CLI or an SDK, not the console.

The following test assumes some knowledge of creating KMS keys using the console and some knowledge of using the AWS CLI. A prerequisite is that you have the AWS CLI installed and access keys configured. I am using an admin user with the AWS managed policy AdministratorAccess.

Using the console or CLI, create a KMS CMK. I chose my admin user as both the administrator and user of the key.

Using the console or CLI, create a test user with programmatic access but no permissions. I used the name “granttestuser”.
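If you prefer the CLI for this step, something like the following should do it:

aws iam create-user --user-name granttestuser
aws iam create-access-key --user-name granttestuser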

Use aws configure to set up a profile for the test user. If you haven't done this before, the command is:

aws configure --profile granttestuser

Try to encrypt something using the profile of the test user. I use the string “hello” for the test.

aws kms encrypt --plaintext "hello" --key-id <key_arn> --profile granttestuser


This will result in an error:

An error occurred (AccessDeniedException) when calling the Encrypt operation: User: arn:aws:iam::xxxxxxxxxx:user/granttestuser is not authorized to perform: kms:Encrypt on resource: arn:aws:kms:eu-west-1:xxxxxxxxxx:key/xxxxxxxxx…

Create a grant for the user, with permissions to use the key for encryption.

aws kms create-grant --key-id <key_arn> --grantee-principal <granttestuser's_arn> --operations "Encrypt"



The command returns a grant token and a grant ID:

{
    "GrantToken": "AQpAMWIyNTQ4MjBi… <output omitted>",
    "GrantId": "c1b42687c23cf408a… <output omitted>"
}

The user supplies the grant token as an argument to the encrypt command:

aws kms encrypt --plaintext "hello" --key-id <key_arn> --grant-tokens <grant_token_from_previous_command> --profile granttestuser


It should now work. Note that the grant token is only needed until the grant has fully propagated through KMS; once it has, the encrypt call succeeds without it.

The grants can be listed:

aws kms list-grants --key-id <key_arn>


And revoked:

aws kms revoke-grant --key-id <key_arn> --grant-id <grant_id>


 

Global Accelerator

In the Architecting on AWS course, the module on HA discusses how Route 53 could be used for a multi-region HA application. The small print at the bottom of the slide says “Consider using Global Accelerator for stringent SLAs”.

The courseware notes describe it as follows:

Global Accelerator provides your network with greater resiliency by removing the role of DNS in failover. It can protect your users and applications from caching issues and allows nearly instantaneous redirection of traffic to healthy endpoints. Additionally, when you add new endpoints into your architecture, they can receive traffic immediately without waiting for DNS propagation.

Global accelerator uses Static IP addresses that are Anycast from multiple AWS edge locations giving us a fixed entry point address that enables ingress of user traffic at an edge location closest to them.

So let's replace the scenario in the slide with Global Accelerator. Getting started and seeing it working only takes a few minutes.

Global Accelerator gives you a pair of static anycast IP addresses, both of which represent your application. Behind them you register endpoints, which can be EIPs, ALBs, or NLBs.

Users are directed to the nearest edge location, and from there, traffic flows to the nearest healthy endpoint. Using the slide example, users in the U.S. would be directed to the ALB in us-west-1. That concept is similar to Route 53 with an appropriate routing policy, and Global Accelerator also supports health checks like Route 53. One difference is that the user's traffic flows from their nearest edge location to the endpoint across the AWS backbone network. Also, as stated in the quote above, you are protected from DNS caching issues, where browsers, applications, and resolvers may all have cached an IP address that is no longer reachable.

To set up a Global Accelerator, I launched an EC2 instance in the us-west-1 region and another in the eu-west-1 region. For ease of testing, each serves a web page displaying its region, extracted from the instance metadata. I gave each an EIP.
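For reference, one way to build such a test page, assuming Amazon Linux 2 and IMDSv1 (my setup may differ from yours), is a user data script along these lines:

#!/bin/bash
# Hypothetical user data: serve a page showing this instance's region
# (the availability zone from instance metadata, minus the trailing letter)
yum install -y httpd
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "Hello from ${AZ%?}" > /var/www/html/index.html
systemctl enable --now httpd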

Go to the Global Accelerator service.

Create an accelerator. You are presented with a wizard.

Add a listener on port 80, TCP.

Add two endpoint groups, choosing us-west-1 and eu-west-1 from a drop-down list.

Add two endpoints. For each endpoint group, select the endpoint type EIP and choose the appropriate EIP from a drop-down list.
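The same setup can also be scripted. A rough CLI equivalent for one endpoint group (the ARNs and the EIP allocation ID are placeholders, and note that the Global Accelerator API is homed in us-west-2 regardless of where your endpoints live):

aws globalaccelerator create-accelerator --name MyTestAccelerator --region us-west-2
aws globalaccelerator create-listener --accelerator-arn <accelerator_arn> --protocol TCP --port-ranges FromPort=80,ToPort=80 --region us-west-2
aws globalaccelerator create-endpoint-group --listener-arn <listener_arn> --endpoint-group-region us-west-1 --endpoint-configurations EndpointId=<eip_allocation_id> --region us-west-2

Repeat the create-endpoint-group command with eu-west-1 and the second EIP's allocation ID for the other group.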

You are given two static IP addresses, either of which can be used for testing. Either IP represents the application and will direct you to the instance in the nearest region.

Global Accelerator health checks both endpoints. Stop the instance closest to you, and you should be directed to the other one.

There is an hourly cost, so to tear it down, disable the Global Accelerator (which takes a minute or so) and delete it. Then terminate the instances and release the EIPs.