File Gateway

As discussed in several AWS courses, Storage Gateway enables your on-premises applications to use AWS cloud storage, for backup, disaster recovery or migration.

In this lab, we will use File Gateway with NFS shares. There will be no requirement for anything on premises, as we will deploy File Gateway as an EC2 instance.

The lab assumes you know how to create a bucket, launch an EC2 instance, and SSH into the instance.

The lab takes about half an hour.

Decide on the region you will use. I used eu-west-1.

Create an S3 bucket to serve as the storage location for the file share.

I called it “filegateway-<my initials>” and took all the defaults; in other words, just choose Create bucket.
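
If you prefer the command line, the same bucket with default settings can be created with the AWS CLI, substituting your own initials and region:

aws s3 mb s3://filegateway-<my initials> --region eu-west-1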

Deploy an Amazon Linux 2 t2.micro instance to be used for the NFS client. This would normally be on premises. I took all the defaults, that is, using the default VPC, and gave it a tag Key:Name, Value:NFS Client. For the security group, allow SSH as we will log into it in order to mount the File Gateway share.
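
If you would rather script this step, a rough AWS CLI equivalent is sketched below; the AMI ID, key pair, and security group ID are placeholders you would substitute with your own values:

aws ec2 run-instances \
  --image-id <amazon-linux-2-ami-id> \
  --instance-type t2.micro \
  --key-name <your-key-pair> \
  --security-group-ids <ssh-enabled-sg-id> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=NFS Client}]'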

Deploy and configure the file gateway appliance as an EC2 instance as follows. In real life, you would normally deploy it as a virtual appliance on ESXi or Hyper-V.

From the console, choose Services, Storage Gateway, and click Get Started if prompted.

Select File Gateway, Next, select EC2, and choose Launch instance. Select the t2.xlarge instance type. Refer to the documentation for the officially supported instance types. If you choose t2.micro, you will likely get errors later when activating the gateway.

On configure instance details, I took all the defaults in order to use the default VPC.

On the storage screen, select Add New Volume and take all the defaults except for enabling “Delete on Termination” so that it’s not left lying around when we clean up. The device name defaulted to /dev/sdb. The size defaulted to 8GB; in real life the recommended minimum is 150GB. This disk will be used for upload cache storage.

I tagged the instance with a Name of “File Gateway Appliance” in order to keep track of it.

For the security group, add rules as follows; an AWS CLI sketch of the same rules follows the list. In real life, please consult the documentation.

Ingress port 22 is not needed for this lab, but can be used to directly interact with the file gateway appliance for troubleshooting.
Ingress port 80 from your web browser. This is for gateway activation and can be removed after activation.
Ingress ports 111 and 2049 from the NFS client. These are for NFS file sharing data transfer. For the allowed source, I used 172.31.0.0/16, because the client is in the default VPC.
I left the outbound rules as default, which will allow Egress 443 for communication with the Storage Gateway service.
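
The same rules can be added with the AWS CLI; the security group ID and your public IP are placeholders:

# Gateway activation from your browser (can be revoked after activation)
aws ec2 authorize-security-group-ingress --group-id <gateway-sg-id> \
  --protocol tcp --port 80 --cidr <your-public-ip>/32

# NFS from clients in the default VPC (111 for the portmapper, 2049 for NFS)
aws ec2 authorize-security-group-ingress --group-id <gateway-sg-id> \
  --protocol tcp --port 111 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id <gateway-sg-id> \
  --protocol tcp --port 2049 --cidr 172.31.0.0/16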

When the instance is ready, note the public IP address.
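
One way to retrieve it from the command line, assuming the Name tag used above:

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=File Gateway Appliance" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" --output text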

Return to the Storage Gateway browser tab and click Next.

Select the Public endpoint and choose Next.

On the Connect to gateway screen, paste the public IP address of the file gateway, then choose Connect to gateway.

On the Activate Gateway screen, give the gateway a name: I chose “File Gateway Appliance”. Choose Activate Gateway.

It takes a few seconds to configure the local disk to use for upload cache storage.

On the Configure local disks screen, you may get a warning that the recommended minimum disk capacity is not met; it should be at least 150GB. This disk is used for the upload cache.

Choose Configure logging, select Disable logging, then Save and Continue.

When the status of the gateway is Running:

Choose Create file share, enter the name of the bucket created earlier, choose NFS, then Next.

On the Configure how files are stored in Amazon S3 screen, select S3 standard and choose Next.

A role will be created for the gateway to access the bucket.

On the review screen, you may see a warning that the share will accept connections from any NFS client. To limit the share to certain clients, choose Edit next to Allowed clients and edit the CIDR block. I used 172.31.0.0/16, as the client is in the default VPC. Choose Create file share and wait for the status to change to Available. This takes about a minute.
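
For reference, a roughly equivalent file share can be created with the AWS CLI; the gateway ARN and IAM role ARN below are placeholders, and the client token is just an arbitrary idempotency string:

aws storagegateway create-nfs-file-share \
  --client-token lab-file-share-1 \
  --gateway-arn <gateway-arn> \
  --role <iam-role-arn-with-bucket-access> \
  --location-arn arn:aws:s3:::filegateway-<my initials> \
  --client-list 172.31.0.0/16 \
  --default-storage-class S3_STANDARD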

Select the share, and you will see the command to mount the file share on Linux, similar to this:

mount -t nfs -o nolock,hard 172.31.42.177:/filegateway-smc [MountPath]

Log in to the NFS client and create a directory to use to sync data with the bucket.

sudo mkdir -p /mnt/nfs/s3

Mount the file gateway file share using the mount command displayed earlier, replacing the mount path with the path above, for example:

sudo mount -t nfs -o nolock,hard 172.31.42.177:/filegateway-smc /mnt/nfs/s3

If this fails, it is most likely due to the security group rules created earlier.
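
A quick way to check from the client that the NFS port is reachable, using only bash built-ins (replace the IP with your gateway’s private IP):

timeout 3 bash -c 'cat < /dev/null > /dev/tcp/172.31.42.177/2049' && echo "port 2049 reachable" || echo "port 2049 blocked"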

Verify the share has been mounted:

df -h

Create a file, for example with nano file1.txt, and add a line “This is file1.txt”.

Copy it to the mount path:

cp -v file1.txt /mnt/nfs/s3
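
If you prefer to skip the editor, the same file can be created and copied from the shell:

echo "This is file1.txt" > file1.txt
cp -v file1.txt /mnt/nfs/s3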

Verify that the file appears in the bucket. For me, it appeared within a few seconds.
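
You can check in the S3 console, or with the AWS CLI from a machine with credentials configured (substituting your bucket name):

aws s3 ls s3://filegateway-<my initials>/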

Try copying some more files, editing a file, and deleting a file.
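
For example (the file names are just illustrative):

cp -v file1.txt /mnt/nfs/s3/file2.txt            # copy under a new name
echo "another line" >> /mnt/nfs/s3/file1.txt     # edit in place
rm /mnt/nfs/s3/file2.txt                         # delete

Each change should be reflected in the bucket within a few seconds.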

To clean up:

Be careful of the order: I have not tried doing it in a different order, but I would not be surprised if there were problems. An AWS CLI sketch of the cleanup follows the steps.

Select the Storage gateway, Actions, Delete gateway. It is removed immediately.
From the EC2 console, terminate the File Gateway Appliance, as it is not terminated automatically.
Terminate the NFS Client instance.
Delete the S3 bucket.
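
If you prefer the CLI for cleanup, the rough equivalents are below; the gateway ARN and instance IDs are placeholders, and the --force flag on the last command empties the bucket before removing it:

aws storagegateway delete-gateway --gateway-arn <gateway-arn>
aws ec2 terminate-instances --instance-ids <file-gateway-instance-id> <nfs-client-instance-id>
aws s3 rb s3://filegateway-<my initials> --force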