Transit Gateway

The “Architecting on AWS” course, as of July 2019, includes some slides on Transit Gateway, which was introduced as an alternative to VPC peering, especially when a large number of peering connections would otherwise be needed.

The scenario is that all 3 VPCs need to connect to an on-premises site via the VPN, but are isolated from each other. The on-premises site is part of network 10, hence the 10.0.0.0/8 route in the VPC route tables, of which only one is drawn. The Transit Gateway has a route to the VPN and to all 3 VPCs, but it is not obvious why the VPCs are isolated from each other.

The links between the VPCs and the TGW, and between the VPN and the TGW, are called attachments. Each attachment is associated with a route table. From the console: “Associating an attachment to a route table allow traffic to be sent from the attachment to the target route table. One attachment can only be associated to one route table”. So in the diagram, traffic from the bottom VPC is sent to the green route table, and traffic from the VPN is sent to the red route table.

To demonstrate this kind of connectivity, I set up a lab. For simplicity, I didn’t use a VPN.

To start with, create 3 VPCs in the same region, with non-overlapping CIDR ranges. I used 10.1.0.0/16, 10.2.0.0/16 and 10.3.0.0/16. For simplicity, I created one public subnet in each, using a /24 mask, with an Internet Gateway in each VPC, and modified each VPC's route table to add a default route to the IGW. Launch an Amazon EC2 instance in each VPC, using a security group which allows SSH and ICMP from anywhere, to keep it simple. I used public subnets simply to make it easy to log in to the instances.
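As a quick sanity check, the chosen CIDR ranges can be verified as non-overlapping with a few lines of Python (the ranges below are the ones from this lab):

```python
# Check that the three VPC CIDR ranges don't overlap.
from ipaddress import ip_network
from itertools import combinations

vpc_cidrs = {
    "VPC1": ip_network("10.1.0.0/16"),
    "VPC2": ip_network("10.2.0.0/16"),
    "VPC3": ip_network("10.3.0.0/16"),
}

for (name_a, net_a), (name_b, net_b) in combinations(vpc_cidrs.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

# All three ranges also sit inside 10.0.0.0/8, which matters later
# when a single 10.0.0.0/8 route towards the Transit Gateway is added.
assert all(net.subnet_of(ip_network("10.0.0.0/8")) for net in vpc_cidrs.values())
print("CIDR ranges are non-overlapping")
```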

First, let's establish full mesh connectivity.

Create a Transit Gateway called TGW, leaving all the defaults. It is in the pending state for about 2 minutes. Notice that it creates a route table with no associations, propagations or routes.

Create an attachment called ATT1 between the TGW and VPC1, leaving all the defaults. It is pending for about 1 minute. Repeat for ATT2 between the TGW and VPC2, and ATT3 between the TGW and VPC3.

Look at the route table again. There is a route to the VPC1 CIDR range via ATT1, and similarly routes to VPC2 and VPC3 via their respective attachments. The routes have automatically propagated from the VPCs.

Click the Associations tab. The route table is associated with each attachment; in other words, traffic from each attachment uses this route table.

Click the Propagations tab. Each attachment propagates to this route table, which is why the VPC CIDR ranges appeared in it automatically.
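To make the propagation behaviour concrete, here is a minimal Python sketch (an illustrative model, not an AWS API) of how each propagating attachment contributes its VPC's CIDR to the default route table:

```python
# Model of route propagation: each VPC attachment advertises its
# VPC's CIDR into the TGW route table it propagates to.
from ipaddress import ip_network

attachments = {
    "ATT1": ip_network("10.1.0.0/16"),
    "ATT2": ip_network("10.2.0.0/16"),
    "ATT3": ip_network("10.3.0.0/16"),
}

# The default TGW route table: every attachment both associates and
# propagates, so the table ends up with one route per VPC.
default_table = {cidr: att for att, cidr in attachments.items()}

for cidr, att in sorted(default_table.items()):
    print(f"{cidr} -> {att}")
```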

We are not quite set up for full mesh connectivity yet. Log on to the instance in VPC1 and try to ping the private IP addresses of the other instances. The pings fail, because the VPC route tables have no routes other than the local route and the default route to the IGW.

For each VPC route table, add a route to 10.0.0.0/8 with the TGW as the target.
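To see why this single route is enough, here is a small Python sketch of longest-prefix-match route selection, with an illustrative model of VPC1's route table (the `local`/`tgw`/`igw` targets are just labels, not an AWS API):

```python
# Sketch of how a VPC route table picks a target: the most specific
# (longest-prefix) matching route wins.
from ipaddress import ip_address, ip_network

# VPC1's route table after adding the 10.0.0.0/8 route towards the TGW.
vpc1_routes = [
    (ip_network("10.1.0.0/16"), "local"),  # traffic within VPC1
    (ip_network("10.0.0.0/8"),  "tgw"),    # other 10.x networks via the TGW
    (ip_network("0.0.0.0/0"),   "igw"),    # default route to the internet
]

def lookup(routes, dst):
    """Return the target of the most specific route matching dst."""
    matches = [(net, tgt) for net, tgt in routes if ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup(vpc1_routes, "10.1.0.50"))  # local: same VPC
print(lookup(vpc1_routes, "10.2.0.50"))  # tgw: instance in VPC2
print(lookup(vpc1_routes, "8.8.8.8"))    # igw: internet traffic
```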

Repeat the ping test. They should all work. We have achieved full connectivity between all 3 VPCs.

Now to set up partial connectivity, similar to the graphic above, but without the VPN to keep it simple. I want VPC1 to be able to communicate with the other VPCs, but VPC2 and VPC3 not to communicate with each other.

Select the TGW route table and delete all 3 associations. This takes a few seconds.

Create a TGW route table called RT1. It is pending for about 1 minute. Associate it with ATT1.

Create a TGW route table called RT2-3. Associate it with ATT2 and ATT3.

For RT1, add a route to the CIDR range of VPC2 and choose attachment ATT2 (this is similar to a target). Add another route to VPC3 via ATT3.

For RT2-3, add a route to the CIDR range of VPC1 via ATT1.
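The resulting behaviour can be sketched in Python: each attachment's inbound traffic is looked up in the route table it is associated with, and a packet with no matching route is dropped. The attachment and table names mirror the lab; the code is an illustrative model, not an AWS API.

```python
# Model of partial-mesh forwarding through the Transit Gateway.
from ipaddress import ip_address, ip_network

# Which TGW route table each attachment's inbound traffic uses.
association = {"ATT1": "RT1", "ATT2": "RT2-3", "ATT3": "RT2-3"}

# Static routes in each TGW route table: destination CIDR -> egress attachment.
tgw_tables = {
    "RT1":   [(ip_network("10.2.0.0/16"), "ATT2"),
              (ip_network("10.3.0.0/16"), "ATT3")],
    "RT2-3": [(ip_network("10.1.0.0/16"), "ATT1")],
}

def forward(src_attachment, dst_ip):
    """Return the egress attachment, or None if the TGW drops the packet."""
    table = tgw_tables[association[src_attachment]]
    for net, out_attachment in table:
        if ip_address(dst_ip) in net:
            return out_attachment
    return None  # no route: the packet goes nowhere

print(forward("ATT2", "10.1.0.50"))  # ATT1: VPC2 can reach VPC1
print(forward("ATT2", "10.3.0.50"))  # None: VPC2 cannot reach VPC3
print(forward("ATT1", "10.3.0.50"))  # ATT3: VPC1 can reach VPC3
```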

Repeat the ping test. The instance in VPC2 can ping the instance in VPC1 but cannot ping the instance in VPC3.

To clean up, delete everything, as TGW costs about $0.05 per attachment per hour. Delete the route table associations, then the route tables, then the attachments, then the TGW.
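As a rough sanity check on why cleanup matters, assuming the approximate $0.05 per attachment-hour figure above (actual pricing varies by region):

```python
# Rough cost of leaving the lab running with 3 VPC attachments.
attachments = 3
rate_per_attachment_hour = 0.05  # approximate; varies by region

hourly = attachments * rate_per_attachment_hour
print(f"~${hourly:.2f}/hour, ~${hourly * 24:.2f}/day")
```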