Amazon AWS, WTF?

Spent the whole day troubleshooting a problem with some pretty random, but stable TCP connection timeouts to one of the Amazon AWS EC2 instances. The problem was that some PCs/laptops/servers would face long-lasting connection timeouts to the instance, while others were working fine. The ones with timeouts would experience problems only at the TCP level, while ICMP ping would pass normally. The other strange thing is that rebooting a client into a different kernel would fix the problem for that particular client for a while.

After checking and googling with no luck and getting completely pissed off, I gave the problem another thought, and this time I felt that something was wrong with AWS NATting. That clearly brought back the memories of troubleshooting TCP fine tuning. So I checked the article, found out which values to make sure are present and went to check the actual instance. A quick look into /proc/sys/net/ipv4/tcp_tw_recycle revealed the problem: its value was 1, so changing it back to 0 via /proc, to apply it immediately, fixed the connectivity issues. But then, when I looked into /etc/sysctl.conf, I saw that the value there was already 0!!! How is that possible if we didn't change it manually via /proc, nor have we touched sysctl.conf for ages, and the last server reboot was only a few days ago, done by Amazon due to their planned maintenance?
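For reference, the on-the-fly part boiled down to something like this (the persistent value still lives in /etc/sysctl.conf):

]# cat /proc/sys/net/ipv4/tcp_tw_recycle
1
]# echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle
]# sysctl net.ipv4.tcp_tw_recycle
net.ipv4.tcp_tw_recycle = 0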

 

Cannot allocate shared memory and kernel.shmmax, kernel.shmall

Ok, this one is short but cool. After a server update and reboot I noticed that zabbix-proxy (and, as I found out later, zabbix-agent as well) didn't start up. Running service zabbix-proxy start gives you OK, but a status check right after tells you it is not OK and the service is stopped.
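Something along these lines (the exact wording of the output depends on your init scripts):

]# service zabbix-proxy start
Starting Zabbix proxy:                                     [  OK  ]
]# service zabbix-proxy status
zabbix_proxy is stopped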

A quick look into the zabbix-proxy log file shows the following:

cannot allocate shared memory of size 16777216: [22] Invalid argument

Hmmm… Checking system memory and looking around, I didn't notice any problems or lack of resources, so a bit of googling pointed me to check the kernel's shared memory configuration:

]# sysctl -a | grep shmmax
kernel.shmmax = 0

And here 0 doesn’t mean unlimited, but literally zero! Ok, fine, but what’s in /etc/sysctl.conf?

 # Controls the maximum shared segment size, in bytes
kernel.shmmax=68719476736

Don't ask me why the value is exactly what it is, it was there historically. Anyway, the running value is clearly wrong, as the configured one is bigger than 0. We need to change it!

]# echo 68719476736 > /proc/sys/kernel/shmmax 
]# sysctl -a |grep shmmax
kernel.shmmax = 0
]# sysctl -w kernel.shmmax=68719476736
kernel.shmmax = 68719476736
]# sysctl -a |grep shmmax
kernel.shmmax = 0

WTF? WTF? WTF? (the return values from my brain while doing and seeing the above). Again a bit of googling, and here we are:

  • After setting kernel parameter SHMMAX to a value larger than 4GB on a 32-bit Red Hat Enterprise Linux system, this value appears to be reset to 0.
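For what it's worth, 68719476736 is exactly 16 × 2^32 (64GB), so if the kernel truncates the value to 32 bits it wraps around to precisely 0, which would explain the literal zero above. That's my guess at the mechanism though, the KB article doesn't go into details.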

Given that the server has 2GB RAM, I changed the value of shmmax to 2147483648, and repeating all of the above worked out as expected, with the value being applied. Restarting my zabbix services and checking again: still no luck, with a slightly different message this time:

cannot allocate shared memory of size 16777216: [28] No space left on device

Seriously?! Checking /etc/sysctl.conf one more time, I found that kernel.shmall has a big value there as well, but is 0 in real life. Adjusting it to match kernel.shmmax and restarting the services worked this time.
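For the record, the combination that finally made zabbix happy was roughly this (the values are for my 2GB box, adjust to your RAM; keep in mind that kernel.shmall is counted in pages, not bytes, so matching it to shmmax just sets a very generous upper limit):

]# sysctl -w kernel.shmmax=2147483648
kernel.shmmax = 2147483648
]# sysctl -w kernel.shmall=2147483648
kernel.shmall = 2147483648
]# service zabbix-proxy start
]# service zabbix-agent start

And the same two values go into /etc/sysctl.conf so they survive the next reboot.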

It's a pity that the Red Hat knowledge base doesn't give a hint about it, as the problem is common to SHMMAX and SHMALL.

BitBucket push surprise

Today I was very surprised and even scared by the response I got when pushing my changes to a repository hosted on BitBucket.

Not that I am against people celebrating whatever they want, changing site logos and whatnot, but this is way too much, especially in this type of task. When you do a lot of pushes all the time, your mind gets used to recognising patterns in the response messages and acting accordingly, and this one screws with the mind completely. To make it simpler: imagine traffic lights changing colours from time to time for similar occasions…

Amazon AWS Subnet Custom Gateway

While Amazon provides different ways to route traffic within and out of your subnets by means of internet gateways and NAT gateways, it's not always the case that they will suit your needs. If you want full control with lots of possibilities for customisation, you might consider building your own firewall instance and pushing all traffic via it.

Amazon provides NAT instances, but they also have some limitations, so to get the full feature set, it is possible to build a custom EC2 instance with whatever AMI and settings you like, attach two network interfaces to it, one in a private and one in a public subnet, and do classic iptables NAT on it.
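The NAT itself is nothing special; a minimal sketch, assuming eth0 is the interface in the public subnet, eth1 the one in the private subnet, and 10.10.20.0/24 is the private network (as in the example below):

]# echo 1 > /proc/sys/net/ipv4/ip_forward
]# iptables -t nat -A POSTROUTING -o eth0 -s 10.10.20.0/24 -j MASQUERADE
]# iptables -A FORWARD -i eth1 -o eth0 -s 10.10.20.0/24 -j ACCEPT
]# iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

Don't forget to disable the source/destination check on the firewall instance (or its network interfaces), otherwise EC2 will silently drop the forwarded packets.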

For all instances in the private subnet to be routed via your custom firewall, you need to adjust the routing table for that subnet and point the default route to the network interface of the firewall instance that sits in that same private subnet.
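With the AWS CLI that is a single route in the private subnet's route table; a sketch with placeholder IDs (rtb-xxxxxxxx being the route table associated with the private subnet, eni-xxxxxxxx the firewall's private-side interface):

]# aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --network-interface-id eni-xxxxxxxx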

All of the above works pretty well, but looks a bit weird. Let's assume the following:

  • We have a VPC 10.10.0.0/16
  • We have a public subnet 10.10.10.0/24 within our VPC
  • We have a private subnet 10.10.20.0/24 within our VPC
  • We have a firewall instance with:
    • public subnet IP: 10.10.10.10
    • private subnet IP: 10.10.20.20
  • We have a host in the private subnet with IP 10.10.20.50

Now, the first question is why we don't use 10.10.20.1 on the firewall instance in the private subnet. Easy: it is used by the Amazon gateway, and even though we have created a routing table that throws everything at the firewall, on an actual host in the private subnet the routing table will be:

default via 10.10.20.1 dev eth0 
10.10.20.0/24 dev eth0 proto kernel scope link src 10.10.20.50 

This means that the host will send traffic to the AWS gateway, and that one will pass it over to our firewall. Probably the idea behind such a configuration is that AWS still needs to check security groups and so on before it hands the traffic over to us.

Cool, and this works fine, but there is a small issue: if you try to ping or access any services at the firewall's public IP (10.10.10.10) from the host in your private subnet – you will fail! Moreover, if you fire up tcpdump on the firewall, listening on its private subnet interface (10.10.20.20) for any packets from the host in the private subnet, and try to ping 10.10.10.10 from the private host – you will see absolutely nothing related to this. Nor will you see any other activity from your private host towards the public IP address of your firewall.
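The tcpdump in question was something along these lines (eth1 being the firewall's private-side interface, 10.10.20.50 the private host):

]# tcpdump -ni eth1 host 10.10.20.50 and host 10.10.10.10

Not a single packet shows up while the private host is pinging 10.10.10.10.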

Wanna go even more weird? If you try to access any other host in the public subnet from your private host (for the sake of example, assume you have another host with IP 10.10.10.99 and you try to ping it from 10.10.20.50): this will work as expected and traffic will flow via the firewall as configured.

Not sure why and how, but Amazon seems to block access through 10.10.20.1 from any host in the 10.10.20.0/24 network to 10.10.10.10, because that IP belongs to the firewall that is the default gateway for any host on 10.10.20.0/24 (even though the IP is in another subnet).

The solution to this problem (if that's a problem in your case) is either to put a direct route to 10.10.10.10 via 10.10.20.20 on the private host, to make sure the private host avoids using Amazon's 10.10.20.1 for this route:

default via 10.10.20.1 dev eth0 
10.10.10.10/32 via 10.10.20.20 dev eth0
10.10.20.0/24 dev eth0 proto kernel scope link src 10.10.20.50

or to completely ignore 10.10.20.1 and set the default gateway via 10.10.20.20:

default via 10.10.20.20 dev eth0 
10.10.20.0/24 dev eth0 proto kernel scope link src 10.10.20.50

Either way, you won't be able to do it via AWS routing tables, but will have to configure routing right on the private host, via the ip route tool or via a route-ethN file for persistence.
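For example, the first option boils down to something like this (the persistent variant assumes a RHEL/CentOS-style network-scripts setup):

]# ip route add 10.10.10.10/32 via 10.10.20.20 dev eth0
]# echo "10.10.10.10/32 via 10.10.20.20 dev eth0" >> /etc/sysconfig/network-scripts/route-eth0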

Keep in mind that if you divert all traffic via 10.10.20.20, you might lose some security group checks within the subnet, so make sure to implement whatever security you need on the actual firewall.

Fixing very outdated Let’s Encrypt

Following my brother's post about Fixing outdated Let's Encrypt, which is pretty useful when sorting out the SSL stuff on servers, I ran into the problem that even with the given solution you will still receive a message about missing zope.interface, as in the initial post.

Luckily, the comment from @skatsumata on github proposes a working solution:

# pip install pip --upgrade
# pip install virtualenv --upgrade
# virtualenv -p /usr/bin/python27 venv27
# . venv27/bin/activate

After the above is done, you still need to re-init Let's Encrypt as per my brother's post:

# rm -rf /root/.local/share/letsencrypt
# /opt/letsencrypt/letsencrypt-auto --debug

And then renew the certs as you normally do.
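In my case "normally" is something along these lines (same path as above, and --debug again because of the old python stack):

# /opt/letsencrypt/letsencrypt-auto renew --debug

followed by a reload of whatever web server is serving the certificates.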