Recently I came across T-Pot, an “all in one honeypot platform” published on GitHub by Deutsche Telekom’s security team. I’ve played around with honeypots before, but they were often tricky to set up and to extract meaningful data from – so the prospect of a straightforward way to gather a lot of data was interesting. This post outlines what a honeypot is, why you might want to run one, and how to go about deploying it on AWS.

What is a honeypot?

Put simply, a honeypot is a system that appears to be an attractive target for an attacker or automated scanner, but which is in fact monitored and used to gather intelligence about attacks, or as an early warning system to notify you about a potential breach or compromise.

For example, an SSH honeypot may present to the internet a facade of being an unprotected SSH server, while actually collecting information about where SSH brute-force attempts are coming from and what credentials attackers are trying to use to gain entry.

Why run a honeypot?

Wikipedia suggests that there are two main types of honeypot – production and research.

A production honeypot may be placed inside a production network to act as an early warning system that could indicate a compromised network or someone snooping where they shouldn’t be. This doesn’t have to be a fully fledged VM or selection of Docker containers running spoof services – it could be as simple as honeypot files, made easy with a free service like CanaryTokens.

A research honeypot could be used to gather threat intelligence about tactics and attack patterns, helping to determine which vulnerabilities are being mass-scanned for, or which IP ranges are consistently performing attacks.

Deploying T-Pot on AWS

The actual deployment of T-Pot is surprisingly straightforward and is mostly automated.

Network

Unless you’re explicitly trying to deploy a honeypot inside an existing VPC on AWS to act as a production-type honeypot, you’ll likely want to create a whole new VPC for it. This ensures the instance running it is isolated from the rest of your resources by default, reducing the risk if the honeypot itself is somehow compromised.

You can easily create a new VPC by going to the VPC console and clicking the “Launch VPC Wizard” button where you can deploy a “VPC with a Single Public Subnet”. This is quite straightforward, needing only a name and a CIDR block which you can just leave as the defaults if you’re not sure.
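
If you’d rather script this than click through the wizard, a minimal AWS CLI sketch looks something like the following – the CIDR blocks are illustrative, and the <vpc-id>, <igw-id> and <rtb-id> values are placeholders for the IDs each command returns:

# Create an isolated VPC for the honeypot
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create a public subnet inside it (use the VpcId returned above)
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.0.0/24

# Create and attach an internet gateway, then route outbound traffic through it
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>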

Instance

Next up is selecting an instance and deploying that so you can SSH in and deploy T-Pot. There are plenty of generic tutorials about deploying an EC2 instance so I won’t describe the whole thing, but the key points are:

  • When asked what AMI you want to deploy, use the latest Debian 10 (Buster) release for your region. Debian helpfully document their AMI IDs here.
  • Use an instance that is well-specced enough to run T-Pot comfortably. Unfortunately this isn’t a cheap endeavour – the cheapest instance type you can realistically go for is probably a t3.large. Be sure to check the pricing for your region, but expect somewhere around $60/month for compute alone – and you’ll need a decent-sized SSD along with that.
  • When configuring your instance details, be sure to select the VPC you made for your honeypot in the previous step, and ensure you auto-assign a public IP so it’s actually accessible from the web.
  • When configuring a security group, you want most ports to be wide open to the internet to ensure you can actually collect data – this obviously isn’t best practice, but the honeypot can’t do its job otherwise. Create the following rules (a CLI sketch follows the list):
    • Allow TCP access across ports 0-64000 from everywhere (0.0.0.0/0)
    • Allow TCP access to port 64295 (the actual SSH daemon once you’ve got T-Pot deployed) from your own IP.
    • Allow TCP access to port 64297 (the web UI once you’ve got T-Pot deployed) from your own IP.
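
The same rules can be added from the CLI – here <sg-id> is a placeholder for your security group’s ID and <your-ip> for your own public IP:

# Honeypot ports open to the world
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port 0-64000 --cidr 0.0.0.0/0

# Management SSH and web UI restricted to your own IP
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port 64295 --cidr <your-ip>/32
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port 64297 --cidr <your-ip>/32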

Installation of T-Pot

Once you’ve got an instance deployed, you can SSH in and begin the deployment process – there aren’t too many steps luckily.

Ensure all the OS dependencies are up to date by running:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y git

Clone the T-Pot git repository onto your instance and then run the installation process:

git clone https://github.com/dtag-dev-sec/tpotce.git
cd tpotce
./install.sh --type=user

This will run through an installation routine that will take around 15-20 minutes. You’ll be asked what type of deployment you want to go for – at the time of writing, the NextGen type pulls in the newest release, so that’s what I went for. You’ll also be asked for initial credentials which you’ll use to connect to the web UI when it’s up and running. The installer will automatically reboot your instance when it finishes, so you’ll get kicked out of your SSH session.

When the AWS console says it’s back up and running, you’ll be able to reconnect over SSH, but SSH will now be running on port 64295, because port 22 is running an SSH honeypot.
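
For example, assuming the stock Debian AMI (whose default user is admin), reconnecting looks like this – the key path and IP are placeholders:

ssh -i <your-key.pem> -p 64295 admin@<instance-public-ip>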

There is also an important final step if you’re deploying on AWS. There are several issues on T-Pot’s GitHub noting that AWS deployments seem to die by themselves after an hour or two, and this seems to relate to the fact that the T-Pot install script disables IPv6. This can be fixed by re-enabling it manually.

Open the file /etc/sysctl.conf in an editor like nano as root so you can save changes to it.

sudo -s
nano /etc/sysctl.conf

Comment out the following lines, which should be at the end of the file, by prefixing each with the # character so they look like this:

#net.ipv6.conf.all.disable_ipv6 = 1
#net.ipv6.conf.default.disable_ipv6 = 1
#net.ipv6.conf.lo.disable_ipv6 = 1

You’ll then need to restart the networking service to have this take effect with:

sudo /etc/init.d/networking restart
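
To confirm the change has taken effect, you can query the kernel setting directly – it should now report 0:

sysctl net.ipv6.conf.all.disable_ipv6
# net.ipv6.conf.all.disable_ipv6 = 0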

You should now be able to hit the web UI at https://<Instance Public IP>:64297 and log in with the credentials you set up at install time to view the dashboards (if you’ve installed the NextGen version, you’ll need to click through to Kibana to see these).

Adding a Let’s Encrypt Certificate

This step is optional but it’s nice to have if you’ve got a domain name pointed at the IP for your instance.

Firstly, we’ll need to install Certbot, which is built by the EFF specifically for obtaining and renewing Let’s Encrypt certificates:

sudo apt-get install certbot

I find the easiest way to obtain a certificate when you’re not hosting a site on a standard port (here, port 80 is being used by another honeypot) is to use the DNS challenge to verify ownership, where you assert that you control a domain by creating a specific DNS record. The following command will initiate a request for a new certificate and tell you what DNS record you need to create to issue a cert for your domain honeypot.example.com:

certbot -d honeypot.example.com --manual --preferred-challenges dns certonly

If you follow the tool’s instructions, you’ll need to create a TXT record for _acme-challenge.honeypot.example.com with a value it provides. When this check passes, you’ll be issued a certificate, and the tool will tell you where it’s put the files – for me this was:

# The certificate file
/etc/letsencrypt/live/honeypot.example.com/fullchain.pem

# The private key file
/etc/letsencrypt/live/honeypot.example.com/privkey.pem

We can then move these to the correct location for the server to pick them up – which in this case is a Docker container called nginx. We’ll first create backups of the existing cert and key, and then copy across our new ones.

# Back up the old cert and key
mv /data/nginx/cert/nginx.crt /data/nginx/cert/nginx.crt.old
mv /data/nginx/cert/nginx.key /data/nginx/cert/nginx.key.old

# Copy across the new Let’s Encrypt cert and key
cp /etc/letsencrypt/live/honeypot.example.com/fullchain.pem /data/nginx/cert/nginx.crt
cp /etc/letsencrypt/live/honeypot.example.com/privkey.pem /data/nginx/cert/nginx.key

Finally, let’s restart the nginx container so it picks up the new certificate files:

docker container restart nginx

You should now be able to browse to https://honeypot.example.com:64297 and find a valid SSL certificate!
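
One caveat: certificates obtained via the manual DNS challenge won’t renew automatically, so you’ll need to repeat this dance roughly every 90 days – essentially re-running the same commands from above:

# Request a fresh certificate (you'll be prompted for a new TXT record)
certbot -d honeypot.example.com --manual --preferred-challenges dns certonly

# Copy the renewed files into place and restart nginx
cp /etc/letsencrypt/live/honeypot.example.com/fullchain.pem /data/nginx/cert/nginx.crt
cp /etc/letsencrypt/live/honeypot.example.com/privkey.pem /data/nginx/cert/nginx.key
docker container restart nginx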

Cost Savings

If you’re intending to run this long term on AWS, you have a couple of options to reduce the cost. Previously your best bet was to purchase a Reserved Instance, but that ties you to an exact instance type in a specific AZ. Instead, look at an EC2 Instance Savings Plan, which provides an identical saving with substantially more flexibility: you get the same payment options, including No Upfront, but you’re committing to a specific spend across an instance family regardless of the AZ or OS you use. In eu-west-1 this represents a saving of 37% over on-demand pricing, so it’s definitely worth looking into if you intend to run this longer term.
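
To put that in concrete terms: applying a 37% saving to the roughly $60/month t3.large compute figure from earlier brings it down to around $38/month, before storage.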

What Now?

Now that you’ve got a honeypot up and running (in actual fact many honeypots – take a look through the list of honeypots that are included in a T-Pot deployment in their README), what next? Leave it alone for a few hours or watch it live as attacks roll in – you may be surprised by just how many you get in a short space of time.

You’ll be able to spot some trends fairly quickly – at the time of writing, the vast majority of my traffic appears to be SMB scanning on port 445, potentially actors seeking to exploit vulnerabilities like EternalBlue. You can also get some insight into the username and password combinations that bots are trying for SSH brute forcing.

I’m hoping to keep this honeypot going for a little while, and I might tweet about any interesting findings I stumble across.