Application Load Balancers are an incredibly useful building block on AWS – they provide not just load balancing, but also super simple TLS termination (where you decrypt HTTPS traffic on the load balancer, reducing the load on your servers) and the ability to put a WAF (Web Application Firewall) in front of your servers for additional protection. One thing AWS doesn’t currently offer, however, is static IP support for ALBs. I’m going to talk here about how you can achieve this using a relatively new service – AWS Global Accelerator.

ALBs and DNS

When you create a new ALB, you get given a DNS name for it that looks a bit like this: my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com.

If you run nslookup on Windows to find out the IP addresses that this domain points to, you’ll get two IPs back. You may remember that when you created your load balancer, AWS made you pick two separate availability zones for its deployment – this is to make the ALB itself more resilient to failure.
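
If you’d rather script the lookup than squint at nslookup output, a quick Python sketch along these lines does the same job (the DNS name is just the example from above – swap in your own):

```python
import socket

# Example ALB DNS name from above - substitute your own load balancer's name.
alb_dns_name = "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"

# getaddrinfo returns one tuple per resolved address; the final element is the
# (ip, port) sockaddr, so collect the unique IPs. An ALB typically hands back
# one address per availability zone it was deployed into.
ips = {result[4][0] for result in socket.getaddrinfo(alb_dns_name, 443, type=socket.SOCK_STREAM)}
print(ips)
```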

AWS doesn’t surface these IPs to you in the console, and that’s because you’re not supposed to use them directly. If you were to point your own custom DNS name www.example.com at the IPs, you’d find that eventually this stops working. That’s because if AWS decide to move some stuff around, or the hardware supporting your ALB fails, your endpoints will get new IP addresses. In the worst case, I’ve seen this happen repeatedly in the course of a day, so it’s not something you can just bank on never happening.

To get around this issue, the DNS name that AWS give you (eg my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com) is managed by AWS on your behalf – when the IPs change, AWS update that DNS record automatically so that it always points to the correct place. If you want to point a custom domain at your ALB, you don’t create an A record that points at the IPs – instead you create a CNAME record that points at the AWS-provided domain (or, if you use Route 53, an alias record that does the same job).
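
For example, if your DNS lives in Route 53, a boto3 sketch of that CNAME might look something like this – the hosted zone ID is a made-up placeholder, and the target is just the example ALB name from earlier:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID for example.com - substitute your own.
HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"

# Point www.example.com at the ALB's AWS-provided DNS name with a CNAME, so the
# record keeps working even when the ALB's underlying IPs change.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"}
                    ],
                },
            }
        ]
    },
)
```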

Why would I want static IPs for my ALB?

There are a variety of reasons you might want static IPs for your ALB, but the most common one is to let people write firewall rules for the traffic going to the application sat behind it.

Imagine you build an application on AWS that collects data from your customers’ networks. Your customer wants to define strict firewall rules that ensure the traffic leaving their network for your application can only flow to a specific destination. This is fairly common, especially in more locked-down environments. If you have a firewall that can create rules based on DNS names (“allow traffic outbound to www.example.com”) then you’re good to go, but this is by no means guaranteed.

Instead, you’ll have to create a rule that says “allow traffic outbound to 1.2.3.4” – but we’ve just said that this IP changes regularly. You can’t ask your customer or end user to repeatedly check and update their firewall rules to take into account your service’s changing IP – you want to give them an IP that always stays the same.

How do I do this?

AWS do publish a tutorial blog post on how to put static IPs in front of an ALB but, whilst clever, it is frankly an absolutely hideous solution to the problem and getting it up and running is not simple.

At re:Invent 2018, AWS released a new service called Global Accelerator. To be honest, it’s not immediately obvious how this helps – it’s a service designed to reduce network latency for your users by getting their traffic onto the AWS Global Network as quickly as possible. The rationale is that the backbone network that AWS own and operate globally – explained in great detail on their fantastic infrastructure.aws page – is less congested and more reliable than the regular, peasant-tier public networks that make up the wider internet.

Crucially, when you deploy a Global Accelerator, you receive a pair of static IP addresses as your “entry point” into the AWS global network. Global Accelerator also needs you to define “endpoints” which are where you want the traffic to end up once it’s travelled through the AWS network – your application. Conveniently, Application Load Balancers are one of the endpoint types that Global Accelerator supports, meaning that if you front your ALB with Global Accelerator then you receive all the benefits of your ALB but now with static IPs!
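
If you want to pull those static IPs out programmatically, a short boto3 sketch along these lines should do it – note that the Global Accelerator API is served out of us-west-2, no matter where your ALB lives:

```python
import boto3

# The Global Accelerator API endpoint lives in us-west-2, even though the
# accelerators themselves are global resources.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Print the name and anycast static IPs of every accelerator in the account.
for accelerator in ga.list_accelerators()["Accelerators"]:
    for ip_set in accelerator["IpSets"]:
        print(accelerator["Name"], ip_set["IpAddresses"])
```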

This diagram from AWS visualises how traffic flows from a user to your application via GA and your ALB.

How does this work?

As an interesting aside, the static IPs you get through Global Accelerator are what’s known as “anycast” IP addresses, meaning that lots of AWS edge locations across the internet announce the same addresses so that traffic takes the fewest possible hops before entering the AWS network. What this means in practice is that if one of your static IPs is 1.2.3.4, for example, then all of AWS’ edge locations across the globe shout out “hey, I’m 1.2.3.4” and your traffic just goes to the closest one.

The users of your application will find that their traffic leaves their network and very quickly enters the AWS network – you can verify this yourself with tracert and see how quickly it actually happens, as the hostnames make it obvious when you’re inside the AWS network. Once the traffic is inside the AWS network, it flows over their backbone to the region where your ALB resides. Your ALB can then process this traffic as normal, just as if it had been hit directly from the internet.

How do I implement this?

The actual deployment of this is much simpler than you may expect. I’m going to assume that you already have an ALB deployed that’s set up to use a custom domain like www.example.com. This means you’ll have a DNS record defined that points www.example.com to the DNS name of your ALB, which will look something like my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com.

Next up, you’ll want to deploy a new Global Accelerator. This literally takes about 5 minutes and is very simple – follow AWS’ Getting Started guide for Global Accelerator.

When you get to Step 4 – Add Endpoints, configure your GA to point at the ALB you already have defined. When it’s finished deploying, you’ll get two anycast static IPs – let’s say 1.2.3.4 and 5.6.7.8.
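
If you’d rather do this with code than clicking through the console, here’s a rough boto3 sketch of the whole flow – the accelerator name and ALB ARN are placeholders, and the endpoint group region should match wherever your ALB actually lives:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Hypothetical ARN of the ALB you already have - substitute your own.
ALB_ARN = (
    "arn:aws:elasticloadbalancing:us-west-2:123456789012:"
    "loadbalancer/app/my-loadbalancer/abcdef1234567890"
)

# 1. Create the accelerator itself; the response includes the two static IPs.
accelerator = ga.create_accelerator(Name="my-alb-accelerator", Enabled=True)["Accelerator"]
print("Static IPs:", accelerator["IpSets"][0]["IpAddresses"])

# 2. Add a listener for the traffic you expect - HTTPS here, so TCP on port 443.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# 3. Create an endpoint group in the ALB's region and point it at the ALB.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-west-2",
    EndpointConfigurations=[{"EndpointId": ALB_ARN, "Weight": 128}],
)
```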

Finally, update your custom domain’s DNS record to point to these new IPs instead of the ALB’s DNS name. Once the changes have finished propagating, traffic will begin to flow through GA to your ALB. Now you have a pair of static IPs in front of your ALB which your end users can use to define firewall rules etc.
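
Again assuming Route 53, the record swap might look something like this in boto3 – the hosted zone ID is a placeholder, and note that the DELETE has to match the existing CNAME exactly, since Route 53 won’t hold a CNAME and an A record under the same name:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID plus the example values used earlier - substitute your own.
HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"
ALB_DNS_NAME = "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"
STATIC_IPS = ["1.2.3.4", "5.6.7.8"]

# Remove the old CNAME and create an A record holding both anycast IPs in one
# atomic change batch.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "DELETE",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ALB_DNS_NAME}],
                },
            },
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ip} for ip in STATIC_IPS],
                },
            },
        ]
    },
)
```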

Caveats

There are a few caveats to deploying Global Accelerator in front of your ALB which you should take into account before deciding if it’s the right solution for you.

  • As you may expect, it’s not free. You pay $18 a month for each accelerator and then per-GB pricing for the traffic you send through it. Overall, the pricing isn’t hugely different from running a second load balancer (like the janky ALB/NLB solution AWS suggest).
  • There is a hard limit on the number of Global Accelerators you can deploy per AWS account. I think this is currently 20 and, because it’s a hard limit, AWS will not increase it for you.
  • Because the load balancer sees traffic coming from the Global Accelerator endpoints, all of your incoming traffic will appear to originate from one of the static IPs associated with your GA. This may be an issue if you restrict incoming traffic by IP. If this is essential for you, you can achieve this with the ALB/NLB Frankenstein approach, using Network ACLs on your NLB subnets to restrict traffic at that level.

Hopefully this is of use to some people – it was a revelation for me when I worked out how these could be combined and it might help you solve some of your issues until AWS eventually get around to adding static IP support to the ALBs.