Mount FSX into AWS ECS EC2 tasks


I had a system of services running on AWS ECS on EC2 instances, and I needed to create an FSX volume and mount it into those EC2 instances.

Create the FSX volume from the AWS Console UI
First, navigate to the AWS Console UI and create the FSX volume.

How do I mount this volume on to EC2?
The AWS Console UI has an Attach button which prints out the command needed to mount the FSX volume onto EC2.
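The exact command depends on the FSX flavor. As a rough sketch, for an NFS-mountable FSX file system, the Attach instructions look something like this (the file system DNS name and mount directory are placeholders):

```bash
# Sketch of the kind of command the Attach button produces, assuming an
# NFS-mountable FSX file system. DNS name and paths are placeholders.
sudo mkdir -p /mnt/fsx
sudo mount -t nfs -o nfsvers=4.1 \
  fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx/ /mnt/fsx
```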

Mount command does not work from EC2
The mount command does not work when run from the EC2 instance. Why? These AWS resources are not yet visible to each other. Networking needs to be established between them: FSX has to be reachable from EC2.

What establishes the networking configuration for EC2?
The EC2 instances are created from an Auto Scaling Group. This Auto Scaling Group has a networking section, which has the subnets specified on it.

What establishes the networking configuration for FSX?
FSX also has a networking section, which has its subnets specified on it.
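Both sets of subnets can also be inspected from the CLI. A hedged sketch, with the Auto Scaling Group name and file system ID as placeholders:

```bash
# Subnets the Auto Scaling Group launches instances into
# (VPCZoneIdentifier is a comma-separated list of subnet IDs).
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].VPCZoneIdentifier' --output text

# Subnets the FSX file system lives in.
aws fsx describe-file-systems \
  --file-system-ids fs-0123456789abcdef0 \
  --query 'FileSystems[0].SubnetIds' --output text
```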

Ensure that these networking coordinates can connect well
Use the Reachability Analyzer to check whether FSX can be reached from the EC2 instances. Reachability Analyzer is a tool that AWS provides to check whether one AWS resource can reach another over the network.

The FSX ENI needs to use the same route table as the one on the EC2 instances' subnet.
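The check can be run from the console, or from the CLI. A hedged sketch, with the instance ID, ENI ID, path ID and region as placeholders:

```bash
# Define a path from an EC2 instance to the FSX ENI on the NFS port.
aws ec2 create-network-insights-path \
  --source i-0aaaabbbbccccdddd \
  --destination eni-0111122223333aaaa \
  --protocol tcp \
  --destination-port 2049 \
  --region us-east-1

# Use the path ID returned above to start the analysis, then inspect
# the result in the console or via describe-network-insights-analyses.
aws ec2 start-network-insights-analysis \
  --network-insights-path-id nip-0123456789abcdef0 \
  --region us-east-1
```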

Learning one — FSX volume that is created needs to be reachable from EC2
Fix the subnets on FSX to use subnets that are reachable from the EC2 instances. Confirm reachability using the Reachability Analyzer.

Given that the networking is fixed, try mounting once more
Mounting still times out. Why would that be?

NFS protocol uses port 2049. SSH protocol uses port 22
These ports need to be opened up on FSX. Specifically, the EC2 instances in the ASG need to be able to communicate with the FSX volume on these ports.

Security group is the abstraction that captures inbound traffic permissions on various ports
A security group is a group of something, something related to security. It is a group of rules: inbound rules and outbound rules. Inbound and outbound to what? To whatever the security group is attached to. On its own, a security group is just a Lego piece, sitting in the air.
It attains meaning when attached to an AWS networking resource, in this example the FSX file system. So, add a security group to FSX which allows inbound traffic on port 22 for SSH access, and on port 2049 for the NFS protocol.
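A hedged CLI sketch of such a security group, assuming the ASG instances live in a 10.0.0.0/16 VPC; the VPC ID, group name and CIDR are placeholders:

```bash
# Create a security group for FSX and open ports 2049 and 22 to the
# CIDR the ASG instances live in. All IDs and CIDRs are placeholders.
SG_ID=$(aws ec2 create-security-group \
  --group-name fsx-access \
  --description "Allow NFS and SSH traffic to FSX" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 2049 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr 10.0.0.0/16
```

The resulting group is then associated with the FSX file system, typically when the file system is created (via the console or the IaC that creates it).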

SSH access to FSX is needed for what?
SSH access allows administering FSX from the EC2 instance.

NFS protocol
This is the Network File System protocol, which allows a user to access a file across the network as if it were a local file. 2049 is the default port for the NFS protocol. That is the reason the security group needs to have an inbound rule for port 2049.
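A quick way to confirm port 2049 is actually open before retrying the mount, run from one of the EC2 instances (the DNS name is a placeholder):

```bash
# Plain-bash TCP connectivity check against the NFS port on FSX.
FSX_DNS=fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
timeout 5 bash -c "cat < /dev/null > /dev/tcp/${FSX_DNS}/2049" \
  && echo "port 2049 is reachable" \
  || echo "port 2049 is NOT reachable"
```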

CIDR
CIDR expands to Classless Inter-Domain Routing. It is a notation for describing a range of IP addresses in a subnet, e.g. 172.2.1.0/24, which covers 172.2.1.0 through 172.2.1.255. The reason CIDR comes into the picture here is that the inbound rules use it to specify the IP range from which to allow access to the ports.

The FSX security group needs to be created in the same VPC as the EC2 instances.

Learning two — establish the security group to open up ports needed for communication between the AWS resources

One issue I ran into while setting this up was that mounting would work from one EC2 instance, but not from another instance in the same Auto Scaling Group. The reason was that the security group was only partially set up: only the CIDR for the working EC2 instance was allowed in the security group. The fix, of course, was to add another inbound rule for the missing CIDR.
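A quick way to audit which CIDRs are currently allowed is to dump the group's inbound rules; a sketch with a placeholder group ID:

```bash
# List the inbound rules on the FSX security group to spot a missing CIDR.
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions' \
  --output json
```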

Automatic mounting of FSX for new instances
Manually mounting FSX is fine. However, an Auto Scaling Group is about instances potentially going down and new instances automatically coming up to take their place. How would the new instances automatically get the FSX volume mounted?

UserData in Auto Scaling Group
The Auto Scaling Group needs to spawn new instances. To spawn new instances, there needs to be a specification of what kind of AMI to use to bring up the EC2 instance, and other such details. That is where a Launch Template comes in.
A Launch Template is the specification for the EC2 instances that come up. As part of this, there is a User Data section, which is where we can specify actions that need to happen on all instances as they come up. Does that not sound like the right place to add the mounting instruction?
Yes, indeed. That is the right place. Add the mount instructions to the User Data section and future EC2 instances will automatically have the FSX volume mounted.
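A minimal sketch of such a User Data script, assuming Amazon Linux and an NFS-mountable FSX file system; the DNS name and mount directory are placeholders:

```bash
#!/bin/bash
# Launch Template User Data sketch: install the NFS client and mount FSX
# on every instance the ASG brings up. Values below are placeholders.
set -euo pipefail

yum install -y nfs-utils

FSX_DNS=fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
MOUNT_POINT=/mnt/fsx

mkdir -p "${MOUNT_POINT}"
mount -t nfs -o nfsvers=4.1 "${FSX_DNS}:/fsx/" "${MOUNT_POINT}"

# Optionally persist the mount across reboots of the same instance.
echo "${FSX_DNS}:/fsx/ ${MOUNT_POINT} nfs nfsvers=4.1,_netdev 0 0" >> /etc/fstab
```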

Pre-requisites for mounting
A reminder again that FSX needs to be reachable from the Auto Scaling Group's instances, and the security group needs to be configured on FSX to allow inbound traffic.
These pre-requisites ideally need to be built into the Terraform/CloudFormation scripts that are used to spawn the Auto Scaling Group and FSX.

Mounting from EC2 instance into the application container
This is just the first part of the story. Now the FSX volume needs to be mounted into the container. This can be done in CloudFormation using bind mounts. There are two sections relevant for this: volumes and mountPoints.
The volume specifies the name of the volume, and the mount point specifies the directory in the container into which the volume needs to be mounted. The third data point needed for the bind mounting to function is the host path. This is the path on the EC2 instance where the volume has been mounted, and this mount is then bound to the container at the container path.
A mount point is essentially a declaration to mount the volume (say, vol1) from the host path directory on the host instance (host in the sense that this EC2 instance is hosting the container) into the container at the container path.
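To make the three pieces concrete, here is a hedged sketch of the same wiring expressed as a raw ECS task definition registered via the CLI (my setup used CloudFormation, which has equivalent Volumes and MountPoints properties); the family, image, volume name and paths are placeholders:

```bash
# Bind-mount wiring: volume name (vol1), host path on the EC2 instance,
# and container path inside the application container.
cat > taskdef.json <<'EOF'
{
  "family": "app-with-fsx",
  "volumes": [
    { "name": "vol1", "host": { "sourcePath": "/mnt/fsx" } }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "vol1", "containerPath": "/data" }
      ]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json
```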

Verifying that the mounting actually worked
In my scenario, I had containers for different applications mounting the same volume. I would log into the EC2 instance, create a file in the volume, and see the file reflected in all the container paths by logging into the tasks.
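A rough sketch of that verification, assuming the volume is mounted at /mnt/fsx on the instance and at /data inside the container, and that the container runtime is Docker (all of these are placeholders from the earlier sketches):

```bash
# On the EC2 instance: drop a marker file onto the shared volume.
echo "hello from the host" | sudo tee /mnt/fsx/marker.txt

# Still on the instance: find a running container and check that the
# marker file shows up at the container path.
CONTAINER_ID=$(docker ps --filter "name=app" --format '{{.ID}}' | head -n 1)
docker exec "$CONTAINER_ID" cat /data/marker.txt
```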
