HAProxy abuse filtering and rate limiting

Just recently I covered Nginx rate limiting by user agent (control bots), which is all cool and handy, but what if you have a number of Nginx instances behind HAProxy and want to offload some of the job to it? Fortunately, HAProxy is very easy to configure and very flexible with ACLs. Here is a simple example of how to do different blacklists and rate limiting (just a part of the configuration, apply where appropriate):

frontend http
 bind *:80
 description Incoming traffic to port 80

 # IP address white/blacklist
 tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
 tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }

 # Maximum time delay for content inspection; note that only one
 # inspect-delay is honored per frontend, and it also serves as the
 # penalty delay for clients that are too fast (see WAIT_END below)
 tcp-request inspect-delay 1000ms

 # ACLs for blacklisted User-Agents and paths
 acl abuse_ua hdr_sub(user-agent) -f /etc/haproxy/blacklist_ua.lst
 acl abuse_path path_beg -f /etc/haproxy/blacklist_path.lst

 # Reject blacklisted User-Agents and paths
 tcp-request content reject if abuse_ua
 tcp-request content reject if abuse_path

 # Matches when the frontend session rate reaches 10 new sessions per second
 acl too_fast fe_sess_rate ge 10

 # Fast path - accept the connection right away if the rate is fine
 tcp-request content accept unless too_fast

 # Too-fast clients get here, meaning they have to wait out the full inspect-delay
 tcp-request content accept if WAIT_END

Whenever you refer to a list file from your configuration, make sure the file is actually in place (even if it's empty), otherwise HAProxy will refuse to start.
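A quick way to make sure all the list files referenced above exist:

touch /etc/haproxy/whitelist.lst /etc/haproxy/blacklist.lst \
      /etc/haproxy/blacklist_ua.lst /etc/haproxy/blacklist_path.lst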

The only limitation of the above is that you can't really check HTTP headers if you are running an HAProxy SSL frontend with SSL SNI (i.e., passing encrypted traffic through), but in that case you can still implement the limits on the Nginx side. The fe_sess_rate limit, though, is still applicable.

One note that I forgot to mention in my previous post on Nginx rate limits: you can also adjust it to work based on requested paths, not only user agents.
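For example, a minimal sketch of a path-based variant of the same map technique (the paths and zone name here are made up for illustration):

map $request_uri $limit_path {
        default "";
        ~*^/(search|export) $binary_remote_addr;
}

limit_req_zone $limit_path zone=paths:10m rate=1r/s;
limit_req zone=paths burst=5 nodelay;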

P.S.: When dealing with configuration changes, make sure to check the validity of the config file after the changes and before restarting/reloading the service. You can do it with haproxy -c -f /etc/haproxy/haproxy.cfg for HAProxy (note the -c flag, which checks the configuration without starting the daemon) or nginx -t for Nginx.


Nginx rate limit by user agent (control bots)

As search engine indexing bots get more and more intelligent, and thus more aggressive, sometimes they become really annoying or can even affect the performance of the system.

While Nginx is a very powerful and flexible system, it is not always clear how to put all the configuration together to do the job. It gets even harder when a single Nginx server serves multiple virtual hosts and you want to apply the same policy to all of them from within the http section of the configuration, instead of the server section of each site.

For any rate limiting, Nginx uses the ngx_http_limit_req_module module, and it is pretty straightforward to limit by IP address or any other simple value, but for advanced configuration you need to use maps with either static keys or regexps. That's where things get more confusing, especially if you want to have some defaults and whitelisting (exclusion from limiting) based on certain conditions.

The Nginx documentation for rate limiting, with regard to exclusions, states:

The key can contain text, variables, and their combination. Requests with an empty key value are not accounted.

Sounds easy, but finding the correct way to implement such an exclusion took me quite some time of googling, reading, trying, failing, googling again and so on and so forth. So, just to have a correct solution documented somewhere closer to me, I will cover it here.

For clarity, and a more extended solution, let's assume we want to limit user agents that match the (GoogleBot|bingbot|YandexBot|mj12bot) pattern to 1 request per IP per minute, burstable to 2, and the rest of the world to 10 requests per IP per second, burstable to 15. To do this, the http section of nginx.conf has to contain the following part:

# First map: flag bot user agents with 1, everyone else with 0
map $http_user_agent $isbot_ua {
        default 0;
        ~*(GoogleBot|bingbot|YandexBot|mj12bot) 1;
}
# Second map: empty key for non-bots (excluded from limiting),
# client address for bots
map $isbot_ua $limit_bot {
        0       "";
        1       $binary_remote_addr;
}

# Bots: 1 request per IP per minute; everyone else: 10 per IP per second
limit_req_zone $limit_bot zone=bots:10m rate=1r/m;
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

limit_req zone=bots burst=2 nodelay;
limit_req zone=one burst=15 nodelay;

The trick here is that we need to use two maps. The first one sets a value of 1 or 0 based on $http_user_agent; the second one takes that value as its key and returns the empty string for everyone who got 0 in the first map, but $binary_remote_addr for the ones who got 1. The idea is that for nginx to exclude a request from a limit zone, the zone key has to be empty, so the first map marks non-bots with 0, and the second map turns that 0 into the empty value "" that makes nginx skip accounting entirely.
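If you want to verify what the maps resolve to for live traffic, one option is to log the intermediate variables (a debugging sketch; the log format name and file path here are arbitrary):

log_format ratelimit '$remote_addr "$http_user_agent" isbot=$isbot_ua key="$limit_bot"';
access_log /var/log/nginx/ratelimit.log ratelimit;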

The rest of the configuration parameters are pretty easy to understand, and I won't cover them here, since you can easily refer to the nginx documentation.

To make things even nicer, we can also tell nginx to send the proper HTTP 429 code (Too Many Requests) when someone is above the limit, and hope that the requester will interpret it accordingly and slow down. To do this, just add the following line in the same part of the nginx configuration:

limit_req_status 429;

If you are using the limit_conn directive anywhere in your nginx configuration, you can add the same thing for it as well:

limit_conn_status 429;
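To check that the limits actually kick in, something like this against a host running the configuration above should do (example.com is a placeholder):

curl -I -A "GoogleBot" http://example.com/
# ...repeat a few times within the same minute; once the burst is exhausted:
# HTTP/1.1 429 Too Many Requests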

Hope the above will save me, and maybe even someone else, some time. More similar posts to come later as I get my hands on different things.

P.S.: Do you know the difference between "~" and "~*" in nginx configuration when dealing with regexes? The answer is pretty simple: the first one matches case-sensitively, while the second matches case-insensitively ;-)
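A tiny illustration of the difference (an artificial example; these would go inside a server or location block):

# "~" is case-sensitive: matches "GoogleBot" but not "googlebot"
if ($http_user_agent ~ GoogleBot) { return 403; }
# "~*" is case-insensitive: matches "GoogleBot", "googlebot", "GOOGLEBOT", ...
if ($http_user_agent ~* googlebot) { return 403; }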

Review: Moto 360 first generation

For the past few weeks I was thinking about writing two posts here about the technology, gadgets and different equipment that I have worked with over the last year or so. The first post was supposed to cover the general consumer electronics that I use in my personal life, and the second one I thought to dedicate to more technical and professional things. While I intended to shortly describe all those nice things in one post or the other and later get into a more detailed review of each item in separate dedicated posts, I quickly realized that these two posts would become way too long, so finally I decided to create a separate Reviews category on my blog and just go with a one-post-per-item approach.

So here is my first review of the first gadget in the series, which I bought last December during my visit to Monaco: the Motorola Moto 360 first-generation smartwatch.


TCP fine-tuning and its consequences

In a constant run to optimize resources, all of us tend to look for ways to fine-tune different aspects of our systems to achieve better server performance. While some people are satisfied with minor adjustments in application configuration, more advanced folks try to go as deep as possible and alter low-level kernel flags to get the best of the best. That's all kinda cool and fun, but as it is impossible to know everything about everything, we often ask Google for advice, and that's where some problems begin. There are a bunch of howtos out there on the internet with different solutions for improving performance, but most of them don't go deep enough to show the possible drawbacks of those solutions. As sysadmins are lazy people, RTFM, or better to say RTF-RFC, and getting all the internals is not in our habit, until something breaks. And that's what I had to face yesterday.

My brother called me for help with a problem he had been trying to figure out for some time already, and it seemed that nothing could save him. The problem was pretty tricky and is pretty well described in Leonid's blog post, so I will focus more on how it was rectified:

The troubleshooting started with simple things and went all the way down to tcpdump. Given the fact that his office server could successfully communicate with the Amazon server, but none of the desktops/laptops behind the office server could, the fun began:

– traceroute shows that we have proper connectivity and routing in place
– tcptraceroute shows problems, and netcat confirms them with a connection timeout
– iptables rules look fine on all the parties
– logs on the Amazon server do not show anything useful

My first thought was: WTF! And when we have a WTF related to connectivity, we use tcpdump.

tcpdump shows that:
– the initial TCP SYN packet successfully leaves the office laptop
– the same packet successfully passes the firewall, coming in on the LAN interface and leaving the WAN interface
– the TCP SYN packet successfully arrives at the destination Amazon server, but
– no TCP SYN-ACK reply ever leaves the Amazon server towards the office

When we leave the office laptop alone and try the same thing from the office server, the TCP SYN successfully reaches the Amazon server and the TCP SYN-ACK, as well as all the following TCP packets, travels successfully between the communicating nodes.

After all of the above info was gathered, the problem was localized to the Amazon server, and the question became as simple as: why is the Amazon server not replying with TCP SYN-ACK to the office laptop, while it does reply with TCP SYN-ACK to everyone else? That was the point where my knowledge of TCP internals was exhausted and I turned to Google for a solution. As always, there are a bunch of articles out there, all with different ideas and very limited low-level explanations, so I came back to tcpdump on the Amazon server and started the game of "find 3 differences between the two TCP SYN packets", one arriving from the office laptop and one from the office server. The only two differences I managed to see were:
– the TCP window size of the packet from the laptop was way bigger (29200) than that from the office server (5840)
– the timestamp value of the packet from the laptop was way smaller (64393040) than that from the office server (809044567)

Quote from tcpdump; the first packet is from the laptop, the second from the office server:

xxx.xxx.xxx.xxx.55470 > yyy.yyy.yyy.yyy.22: Flags [S], cksum 0x8cb3 (correct), seq 3904091306, win 29200, options [mss 1460,sackOK,TS val 64393040 ecr 0,nop,wscale 8], length 0
15:53:00.755020 IP (tos 0x0, ttl 50, id 55870, offset 0, flags [DF], proto TCP (6), length 60)
zzz.zzz.zzz.zzz.43952 > yyy.yyy.yyy.yyy.22: Flags [S], cksum 0xcfbf (correct), seq 1790824553, win 5840, options [mss 1460,sackOK,TS val 809044567 ecr 0,nop,wscale 8], length 0
15:53:00.755071 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)

With the above two facts I started investigating the TCP window size. I remembered that this metric can be dynamic and the difference is possible, but I thought it was more likely the problem than the timestamp, which is obviously different all the time, and who cares about timestamps anyway. Google showed me a number of sysctl options to try, including but not limited to disabling TCP window scaling, adjusting different buffers of the OS TCP stack and so on, which I tried to apply everywhere, including the Amazon server, the office server and the office laptop, all with no success. Finally, some post found via Google (I lost the original link) suggested that setting net.ipv4.tcp_tw_recycle to 0 solved the problem. Having no other alternatives, I applied the setting on the Amazon server and everything came back to normal: now everyone could connect to the server and all was working as it was supposed to.
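For reference, applying and persisting the flag is a one-liner each (worth noting that tcp_tw_recycle was removed from Linux entirely in kernel 4.12, precisely because of problems like this one):

# Apply immediately
sysctl -w net.ipv4.tcp_tw_recycle=0
# Persist across reboots
echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf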

Since the problem was gone, I reported to my brother that he could continue with his other tasks, one problem less, made sure that the flag was set permanently in /etc/sysctl.conf, and realized that now I need to learn more TCP internals. Fortunately, there is an amazing article by Vincent Bernat, "Coping with the TCP TIME-WAIT state on busy Linux servers", that dives into how the whole thing works, why we should not mess with TCP TIME-WAIT, and why, in the end, changing this flag will not give you any visible advantages. It also explains the mystery above: with tcp_tw_recycle enabled, the server silently drops SYN packets from any host whose TCP timestamp is lower than the last one seen from the same IP, and since the office laptop and the office server sat behind the same NAT address with completely different timestamp clocks, the laptop lost that race every time.

As a summary of the above: before you change any kernel flags, make sure you really understand what you are doing. Before applying any configuration changes proposed by some online howto, make sure you know exactly what you are doing and don't trust anyone blindly. Finally, learn to troubleshoot with low-level tools that will help you spot the problem, or at least show the direction for further troubleshooting.

P.S.: Leonid, thanks for the experience and something new! Was fun!

JavaScript, Node.js, Meteor, MongoDB and related

For the past few weeks I've been playing with the Meteor reactive framework, which heavily utilizes Node.js and MongoDB. It's been a while since I did something in JavaScript, and never before have I tried something that can be called "reactive". While a few things are pretty weird and a lot of concepts are familiar, there were a few moments that got me stuck for a bit, and I want to post them here just to remember them in the future:

Net package for sockets from Node.js

Since my task required some plain socket communication with another service, I touched the default net package from Node.js, and while at first glance it seemed pretty easy, there are a couple of problems. Some of them are well known (like binding sockets to fibers with bindEnvironment), and there are lots of posts around the forums and related sites, but the one that made me go crazy was reading data from a socket line-by-line.

The data event on the socket fires whenever some data comes in, but that doesn't mean you will receive it line-by-line. You can receive part of a line or lots of lines, and the last one is not guaranteed to be terminated by a newline. The workaround I found with the help of Google was to use a backlog: whenever you receive some data, append it to the backlog and then shift data off the backlog line-by-line, leaving whatever remainder is not a complete line in the backlog.

The idea is clear and should work, but what happens if some more data arrives on the socket and the data event fires while you are still processing the previous call of that event? The hard way, I found out that I then have multiple callbacks manipulating the same backlog, and that ends up making a mess of your data. The old-school ideas I thought of were all kinds of locks, tokens and so on to prevent such behaviour, but they didn't work out very well. Finally, the easiest way was to pause the socket whenever some data is received, process that data and then resume the socket when the processing is done. To make my life even easier, whenever I have a full line extracted from the socket backlog, I just emit a line event on the socket and process it in a different place. The code I ended up with is as follows:

// Buffer for data that has arrived but does not yet form a complete line
var socketBackLog = '';

socket.on('data', Meteor.bindEnvironment( function (data) {
    // Stop further 'data' events while this chunk is being processed
    socket.pause();
    socketBackLog += data;

    // Emit a 'line' event for every complete line in the backlog;
    // ~n is shorthand for n !== -1
    var n = socketBackLog.indexOf('\n');
    while (~n) {
        socket.emit('line', socketBackLog.substring(0, n));
        socketBackLog = socketBackLog.substring(n + 1);
        n = socketBackLog.indexOf('\n');
    }
    socket.resume();
}));
socket.on('line', Meteor.bindEnvironment( function (data) {
    processData(data.toString());
}));

Note the Meteor.bindEnvironment wrapped around all the socket callbacks: this is the way to keep things running inside fibers, otherwise Meteor will complain and fail.

Loose data types

I know it is pretty common practice now, and many languages do not force you to cast variables or declare them with a particular type, and that is somewhat cool in most cases, but sometimes I really miss C-style strictness. This time with JavaScript was exactly the case. Did I ask it to convert my 1/0 to boolean true/false, or to the strings "1"/"0", when I stated it is 0/1? Since I need an integer type, my code is full of binary OR operations that force JavaScript to keep variables as 32-bit integers.

An example of inserting some stuff into MongoDB while forcing an integer:

Stuff = new Mongo.Collection("stuff");
Stuff.insert({
    name: "Test",
    number_of_kids: (1 | 0)
});

Basically, I binary-OR my "integer" variables with 0 and use the result.

MongoDB object relationships and related

That is still a mystery to me and I will try to sort it out eventually. Assume I have one child collection that has two different parent collections, or, in a more traditional way, two OneToMany relations where the "Many" side is the same. In my case, here is how it would look:

ParentsA = new Mongo.Collection("parents_a");
ParentsB = new Mongo.Collection("parents_b");
Childs = new Mongo.Collection("childs");

var parent_a = ParentsA.insert({
    name: "parent 1 type 1",
    count: (0 | 0),
    childs: []
});

var parent_b = ParentsB.insert({
    name: "parent 1 type 2",
    count: (0 | 0),
    childs: []
});

var child = Childs.insert({
    name: "child 1",
    parent_a_id: parent_a,
    parent_b_id: parent_b
});

// Bump the counter and record the child on each parent
ParentsA.update(parent_a, {$inc: {count: (1 | 0)}, $addToSet: {childs: child}});
ParentsB.update(parent_b, {$inc: {count: (1 | 0)}, $addToSet: {childs: child}});

In my case, if I do a find/findOne on the records, both parents have in their childs field a list of child IDs (not child objects), which I assume is normal, but a strange thing happens with the child record itself: it has parent_a_id as a plain ID for parentA, while parent_b_id contains the whole parentB object. So to find the ID of parentA I can use child.parent_a_id, but for parentB I have to use child.parent_b_id._id, and until now I don't know what controls this behaviour.
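Until I figure out why, a defensive helper keeps the lookups working whichever form the field comes in (an illustrative sketch; parentId is my own made-up name):

// Return the plain ID whether we got an ID or a whole document
function parentId(ref) {
    return (ref && ref._id) ? ref._id : ref;
}

var pa = ParentsA.findOne(parentId(child.parent_a_id));
var pb = ParentsB.findOne(parentId(child.parent_b_id));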

Another problem I faced is that, to my knowledge, there is no way to count the number of items in a parent's childs field directly in a query, so I have to keep track of it with the count field. The good thing is that there are a few query modifiers in Mongo that make my life easier: as you can see, I use the $inc modifier to adjust the count, as well as $addToSet to make sure I don't put the same child into a parent twice.

Setting session variables on client from server

I really love all these reactive things, the way the client reacts to collection changes and session adjustments, but one thing is still not clear to me: how can I adjust a client's Session variable from the server after some event happens? A simple example:

if (Meteor.isClient) {
    Template.some_template.events({
        'click .send_data': function (event,template) {
            Meteor.call('processData',template.find('input[name="data"]').value);
        }
    });
}
if (Meteor.isServer) {
    Meteor.methods({
        'processData': function (data) {
            var socket = new Imap({....});
            socket.once('ready', Meteor.bindEnvironment(function () {
                // here I want to set clients session "imap_ready" to true
            }));
            socket.connect();
        }
    });
}

What happens is that whenever the client clicks the .send_data button, we get the data from the input and pass it to the server's processData method, which tries to establish a connection to an IMAP server, and if that goes fine, I want to update the client's "imap_ready" Session variable. The problem here is that we don't really know when (if at all) the socket connection will emit the ready event, and processData will surely have returned by that time, so using the optional callback of Meteor.call is not an option either.

For the time being, I solved the problem by introducing a MongoDB collection with session_id, key and value fields. Whenever the client calls a server method of this kind, it passes session_id as an additional argument (BTW, I had to use a persistent-session add-on to avoid losing session data, and a uuid add-on to generate a nice session id). Then, whenever the server has something to pass back, it updates the relevant Mongo document, and on the client side I use observeChanges on the collection to gather all the data and put it into the Session. Sounds weird, and I don't like this way, but it somehow works. If anyone can suggest a better way to handle the above problem, feel free to comment or contact me in any other way.
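Roughly, the moving parts look like this (a sketch only: the collection and variable names are made up, the collection is assumed to be published to the client, and I use observe instead of observeChanges for brevity since it hands over whole documents):

ServerEvents = new Mongo.Collection("server_events");

if (Meteor.isServer) {
    // Called from inside the socket 'ready' callback instead of
    // touching the client's Session directly
    ServerEvents.upsert(
        { session_id: sessionId, key: "imap_ready" },
        { $set: { value: true } }
    );
}

if (Meteor.isClient) {
    // Mirror every server-side change for this session into the
    // reactive Session, so templates update automatically
    ServerEvents.find({ session_id: Session.get("session_id") }).observe({
        added: function (doc) { Session.set(doc.key, doc.value); },
        changed: function (doc) { Session.set(doc.key, doc.value); }
    });
}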

I think that's enough for the time being. Maybe one day I will post a follow-up with solutions (where applicable) to the problems above.