AWS Fargate: First hands-on experience and review

Remy DeWolf
Dec 5, 2017

On November 29, 2017, AWS announced Fargate with the following promise: run containers without managing servers or clusters.

This article will guide you through setting up a simple HTTP server with AWS Fargate. From there, we will scale the service using Auto Scaling.

Based on this first hands-on experience, we will assess whether the technology delivers on its original promise and reflect on potential use cases going forward.

Deploy an application using Fargate

The fastest way to get started is to go to the AWS Console and run the new first-run wizard for Fargate. Make sure you are in the us-east-1 (N. Virginia) region; Fargate is only available in that region at this time.

The first screen comes up with a pre-configured sample application based on the httpd Docker image.

Select the sample-app and click Next.

The next step is to decide if we want to use an Application Load Balancer. With an ALB, we could register multiple container instances for the service, therefore setting us up for Auto-Scaling.

Select Application Load Balancer and click Next.

Even though Fargate containers don’t need EC2 instances, they still need to be registered to an ECS Cluster.

Set the cluster name to fargate and click Next.
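If you prefer to script this step later, the equivalent AWS CLI call is a single command; this is a sketch assuming the CLI is configured for us-east-1 and reuses the cluster name typed in the wizard:

    aws ecs create-cluster --cluster-name fargate --region us-east-1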

The next screen gives an overview of the components to be created: Task Definition, Service and Cluster.

Click Create to create the cluster.

The AWS resources are being created. It took 4 min for this run.

Under the hood, a CloudFormation stack is executed. The template can be exported and reused if you want to script this part instead of going through the UI.
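For example, you can pull the generated template back out with the AWS CLI; the stack name below is a placeholder, use whatever name the wizard generated in your account:

    # Find the stack created by the wizard, then export its template for reuse
    aws cloudformation describe-stacks --region us-east-1 --query 'Stacks[].StackName'
    aws cloudformation get-template --region us-east-1 --stack-name <stack-name> \
      --query TemplateBody > fargate-stack.json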

Once the Fargate cluster is created, let’s test that the container is up and can be reached through the ELB.

Click on View Service

The service lists the Target Group used for Load Balancing.

Pro Tip: If you plan to run containers through the CLI (aws ecs run-task), you will need the subnets and securityGroups values that are listed on this screen.
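As a sketch, a one-off Fargate task launched from the CLI looks like this; the task definition name, subnet and security group IDs are placeholders to fill in from that screen:

    aws ecs run-task \
      --region us-east-1 \
      --cluster fargate \
      --launch-type FARGATE \
      --task-definition <task-definition-name> \
      --count 1 \
      --network-configuration 'awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<sg-id>],assignPublicIp=ENABLED}'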

Click on the target group to navigate to the Load Balancer

From the target group, we can navigate to the Load Balancer.

Click on the Load Balancer name

The DNS name can be copied from this page. This is the public hostname for the Load Balancer.

Now, let’s test by connecting to the copied URL. The load balancer redirects the HTTP request to the container, which returns the HTML page. Success!
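The same check can be scripted; assuming the AWS CLI is configured, something along these lines looks up the DNS name and requests the sample page (the load balancer name is whatever the wizard created):

    # Look up the public DNS name of the load balancer, then request the sample page
    DNS_NAME=$(aws elbv2 describe-load-balancers \
      --region us-east-1 \
      --names <load-balancer-name> \
      --query 'LoadBalancers[0].DNSName' --output text)
    curl -s "http://${DNS_NAME}"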

Scale the service by adding more containers

Now that we have one container, how can we add more container instances to our service?

First step, let’s go back to the ECS service. The desired count and running count are set to 1. We want to bump this number to 2.

Select Update to update Desired Count

Let’s set the number of tasks to 2.

Click on Next step

Click Next at each screen, until the Auto Scaling screen shows up.

Select Auto Scaling and set the maximum number of tasks. For this example, we limited it to 2 to prevent accidental auto-scaling.

Click Next step
Confirm by clicking on Update Service
Once Service Auto Scaling is created click on View Service
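If you would rather script these two steps, a rough CLI equivalent is to bump the desired count and register the service with Application Auto Scaling; the service name is a placeholder and the limits are the ones used above:

    # Bump the desired count on the service
    aws ecs update-service \
      --region us-east-1 \
      --cluster fargate \
      --service <service-name> \
      --desired-count 2

    # Register the service as a scalable target, capped at 2 tasks
    aws application-autoscaling register-scalable-target \
      --region us-east-1 \
      --service-namespace ecs \
      --scalable-dimension ecs:service:DesiredCount \
      --resource-id service/fargate/<service-name> \
      --min-capacity 1 \
      --max-capacity 2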

Now the service has a Desired Count of 2, but the Running count is still 1. Let’s have a look at the Tasks tab.

Select the Tasks tab

The newly created task goes from PROVISIONING to PENDING to RUNNING. It took 3 min during this test to reach the RUNNING state.
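You can watch the same transition from a terminal by polling the task status; the task ARN comes from aws ecs list-tasks:

    # List the tasks in the cluster, then check the status of one of them
    aws ecs list-tasks --region us-east-1 --cluster fargate
    aws ecs describe-tasks \
      --region us-east-1 \
      --cluster fargate \
      --tasks <task-arn> \
      --query 'tasks[].lastStatus'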

If we go back to the Target Group, there are now 2 instances registered to the ELB.

So we have 2 registered targets on our Load Balancer, but there is no EC2 instance running. How is that possible?

If you check the Network Interfaces, you will find that an entry was created for each container task.

3 network interfaces were created: 1 for the ELB, 2 for the container tasks
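To see those interfaces without clicking through the console, you can filter the ENIs by the VPC the wizard created; the VPC ID is a placeholder:

    aws ec2 describe-network-interfaces \
      --region us-east-1 \
      --filters Name=vpc-id,Values=<vpc-id> \
      --query 'NetworkInterfaces[].[NetworkInterfaceId,Description,PrivateIpAddress]' \
      --output table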

Review of AWS Fargate

Does it live up to the promise?

Run containers without managing servers or clusters

50% delivered: there is no EC2 instance, but there is still an ECS cluster, along with all the infrastructure that is needed to have an ECS cluster.

Not managing servers?

Generally speaking, managing EC2 instances can be a burden: there are many instance types, and there is no simple rule for setting up auto-scaling efficiently; it is usually a process of trial and error over time.

With Fargate, you can’t choose the EC2 instance type, which can be an issue for some use cases. Some applications perform best on a specific instance type; for example, P3 instances are suited to machine/deep learning workloads.

Still managing the cluster?

At the end of the day, Fargate is a pure ECS solution, minus the management of EC2 instances. If you have never done ECS cluster management or come from Kubernetes, you might be confused by the notions of cluster, task, service and container definition.
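To make the vocabulary concrete, here is roughly what the smallest Fargate task definition looks like; the family and container names are made up, and the CPU/memory values mirror the sample app:

    # taskdef.json - a minimal Fargate task definition
    {
      "family": "sample-app",
      "requiresCompatibilities": ["FARGATE"],
      "networkMode": "awsvpc",
      "cpu": "256",
      "memory": "512",
      "containerDefinitions": [
        {
          "name": "sample-app",
          "image": "httpd:2.4",
          "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
          "essential": true
        }
      ]
    }

    # Register it so a service (or run-task) can launch it
    aws ecs register-task-definition --region us-east-1 --cli-input-json file://taskdef.json

A task is one running copy of this definition, the service keeps the desired number of tasks running behind the load balancer, and the cluster is just the namespace they are grouped under.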

How can we make it better?

ECS is complex. AWS is going in the right direction by trying to remove some layers and simplify it.

But why not go all the way and make it plain and simple?

Here is a suggestion: extend the notion of container service and provide a one-screen setup with the following parameters:

Input:

  • Service Name: my-sample-app
  • Docker Image: httpd:2.4
  • Memory: 0.5 GB
  • CPU: 0.25 vCPU (256)
  • VPC: Drop-down list. (if there is none, provide a one-screen UI for this)
  • Subnets: List of subnets where the container would be hosted (public/private is derived from this choice)
  • Desired count: number of containers to start
  • Load Balancer: yes/no (if yes, ask for port)

Output:
  • Load Balancer URL
  • List of container IPs

VPC creation is currently bundled into the Fargate wizard; this could be separated for clarity. Most likely, different sets of users would be configuring the VPC and deploying containers.

With this solution, we could remove the notion of cluster entirely and merge the concepts of Task Definition and Container into the Service. The user would only manage Container Services.

Keeping it this simple would also lean toward a solution that is agnostic of the container orchestration tool.

Is Fargate still worth it?

If you have never set up ECS, Amazon does a great job of providing a wizard that gets you started quickly. From there, you can save the CloudFormation template and configure it properly. Most likely you will want to revisit the security settings (tighten the security groups and carefully select which components have public IPs).

During the initial tests, it took a few minutes for a container to start on several occasions. Behind the scenes, AWS has to find EC2 instances for you, which can introduce some latency; if you manage your own servers (without Fargate), you can always pre-provision instances to avoid this. If you plan to use ECS in production, I would recommend benchmarking Fargate vs EC2/ECS.
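A crude way to measure that startup latency is to compare the timestamps ECS records on each task; createdAt is when the task was requested and startedAt is when the container was actually running:

    aws ecs describe-tasks \
      --region us-east-1 \
      --cluster fargate \
      --tasks <task-arn> \
      --query 'tasks[].[createdAt,startedAt]'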

Given that it is very easy to set up, Fargate is a very good tool for prototyping applications. If you were thinking of running your CI on Docker containers, Fargate would be a good answer, as you won’t have to deal with auto-scaling and cluster management.

Bottom Line

AWS Fargate removes a layer of complexity from the current ECS offering, but it still uses the concepts of Cluster, Service, Task Definition and Container Definition. The new Fargate wizard hides a lot of the complexity. If you are using AWS and don’t have a container orchestration tool, it is the easiest way to get started.

Because Fargate doesn’t let you choose the EC2 instance type, you might miss out on some optimizations. For high performance applications, benchmarking Fargate vs EC2/ECS Cluster would be recommended.

If you are using Kubernetes, hold on, AWS Fargate support for Amazon EKS will be available in 2018. Stay tuned.
