AWS net.ipv4.tcp_tw_recycle follow-up

Yesterday I wrote a post about an AWS EC2 instance networking problem that I was pretty surprised to run into. While yesterday I was focused on fixing the problem, today my first task was to find out what actually sets the flag, and a quick grep through /etc of the instance revealed that the setting was applied by /etc/sysctl.d/net.ipv4.tcp_tw_recycle.
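For the record, tracking down which file sets a given sysctl is a one-liner; a recursive grep over the usual locations is enough (adjust the paths to your distribution):

]# grep -r tcp_tw_recycle /etc/sysctl.conf /etc/sysctl.d/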

It was very strange to find it there alongside net.ipv4.tcp_tw_reuse, which is also something you should not touch. Anyhow, the problem is identified, fixed and about to be added to monitoring…
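Since Zabbix shows up elsewhere in these posts anyway, one low-tech way to monitor it would be an agent UserParameter that just returns the live value (the item key name below is my own invention):

UserParameter=sysctl.tcp_tw_recycle,cat /proc/sys/net/ipv4/tcp_tw_recycle

plus a trigger that fires whenever the item returns anything other than 0.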

Amazon AWS, WTF?

I spent the whole day troubleshooting pretty random, yet persistent, TCP connection timeouts to one of our Amazon AWS EC2 instances. The problem was that some PCs/laptops/servers would face long-lasting connection timeouts to the instance, while others were working fine. The ones with timeouts would experience problems only at the TCP level, while ICMP ping would pass normally. The other strange thing was that rebooting a client into a different kernel would fix the problem for that particular client for a while.
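In hindsight, there is a quick way to spot this class of problem on the server side: watch the timestamp/PAWS drop counters (the exact counter wording varies a bit between kernel and net-tools versions):

]# netstat -s | grep -i "time stamp"

If the "passive connections rejected because of time stamp" counter keeps climbing while some clients are timing out, tcp_tw_recycle combined with NAT is a prime suspect.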

After checking and googling with no luck and getting completely pissed off, I gave the problem another thought, and this time I felt that something was wrong with AWS NATting. That immediately brought back memories of troubleshooting TCP fine tuning, so I went back to that article, noted the values to verify and went to check the actual instance. A quick look into /proc/sys/net/ipv4/tcp_tw_recycle revealed the problem: its value was 1, so writing 0 back into it to apply the change immediately fixed the connectivity issues. But then, when I looked into /etc/sysctl.conf, I saw that the value there was already 0!!! How is that possible if we didn’t change it manually via /proc, haven’t touched sysctl.conf for ages, and the last server reboot was only a few days ago, done by Amazon due to their planned maintenance?
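In terms of commands, the runtime check and fix looked roughly like this:

]# cat /proc/sys/net/ipv4/tcp_tw_recycle
1
]# echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle
]# cat /proc/sys/net/ipv4/tcp_tw_recycle
0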


Cannot allocate shared memory and kernel.shmmax, kernel.shmall

Ok, this one is short but cool. After a server update and reboot I noticed that zabbix-proxy (and, as I found out later, zabbix-agent too) didn’t start up. Running service zabbix-proxy start gives you OK, but a status check right after tells you otherwise: the service is stopped.

A quick look into the zabbix-proxy log file shows the following:

cannot allocate shared memory of size 16777216: [22] Invalid argument
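In hindsight, error [22] (EINVAL) from shmget() means the requested segment size falls outside the kernel’s SHMMIN..SHMMAX range. If you want to reproduce it outside of Zabbix, asking util-linux’s ipcmk for a segment of the same size should fail in the same way:

]# ipcmk -M 16777216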

Hmmm… Checking system memory and looking around, I didn’t notice any problems or lack of resources, so a bit of googling pointed me to checking the kernel’s shared memory configuration:

]# sysctl -a | grep shmmax
kernel.shmmax = 0

And here 0 doesn’t mean unlimited, but literally zero! Ok, fine, but what’s in /etc/sysctl.conf?

# Controls the maximum shared segment size, in bytes
kernel.shmmax=68719476736

Don’t ask me why the value is exactly what it is; it was there historically. Anyway, the value the kernel is running with is clearly wrong, since the configured one is a lot bigger than 0. We need to change it!

]# echo 68719476736 > /proc/sys/kernel/shmmax 
]# sysctl -a |grep shmmax
kernel.shmmax = 0
]# sysctl -w kernel.shmmax=68719476736
kernel.shmmax = 68719476736
]# sysctl -a |grep shmmax
kernel.shmmax = 0

WTF? WTF? WTF? (the return values from my brain while doing and seeing the above). Again a bit of googling, and here we are:

  • After setting kernel parameter SHMMAX to a value larger than 4GB on a 32-bit Red Hat Enterprise Linux system, this value appears to be reset to 0.
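Which also explains the exact zero: 68719476736 is 2^36, so on a 32-bit kernel it simply doesn’t fit, and the low 32 bits are all zeroes. The arithmetic is easy to double-check in any 64-bit shell:

]# echo $(( 68719476736 % (1 << 32) ))
0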

Having checked that the server has 2 GB of RAM, I changed the value of shmmax to 2147483648, and repeating the above worked out as expected, with the value actually being applied. I restarted my Zabbix services and checked again: still no luck, this time with a slightly different message:

cannot allocate shared memory of size 16777216: [28] No space left on device

Seriously?! Checking /etc/sysctl.conf one more time, I found that kernel.shmall also has a big value there, but 0 in real life. Adjusting it to be consistent with kernel.shmmax and restarting the services worked this time.
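A consistent end state in /etc/sysctl.conf would look something like this (my sketch for this 2 GB box; note that kernel.shmall is counted in pages of 4096 bytes, not bytes, so “matching” shmmax really means covering the same amount of memory):

# Controls the maximum shared segment size, in bytes (2 GB box)
kernel.shmmax = 2147483648
# Controls the total shared memory, in pages: 2147483648 / 4096
kernel.shmall = 524288

followed by sysctl -p to load it and a restart of the Zabbix services.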

It’s a pity that the Red Hat knowledge base article doesn’t hint at this, as the problem is common to both SHMMAX and SHMALL.