JavaScript, Node.js, Meteor, MongoDB and related

For the past few weeks I've been playing with the Meteor reactive framework, which makes heavy use of Node.js and MongoDB. It's been a while since I did anything in JavaScript, and never before had I tried anything that could be called "reactive". While a few things are pretty weird and a lot of the concepts are familiar, there were a few moments that got me stuck for a bit, and I want to post them here just to remember them in the future:

Net package for sockets from Node.js

Since my task required some plain socket communication with another service, I reached for the default net package from Node.js. While at first glance it looked pretty easy, there were a couple of problems. Some of them are well known (like binding socket callbacks to fibers with bindEnvironment), and there are lots of posts about them on forums and related sites, but the one that drove me crazy was reading data from a socket line by line.

The data event on a socket fires whenever some data arrives, but that doesn't mean you receive it line by line. You can get part of a line or many lines at once, and the last one is not guaranteed to be terminated by a newline. The workaround I found with the help of Google was to use a backlog: whenever you receive some data, append it to the backlog, then shift complete lines out of the backlog one by one, leaving whatever remains (an incomplete line) in the backlog. The idea is clear and should work, but what happens if more data arrives on the socket and the data event fires while you are still processing the previous call? I found out the hard way that I then have multiple callbacks manipulating the same backlog, which makes a mess of the data. The old-school ideas I thought of were all kinds of locks, tokens and so on to prevent such behaviour, but they didn't work out very well. In the end, the easiest way was to pause the socket whenever some data is received, process that data, and resume the socket when processing is done. To make my life even easier, whenever I have a full line extracted from the socket backlog, I simply emit a line event on the socket and process it elsewhere. The code I ended up with is as follows:

socket.on('data', Meteor.bindEnvironment( function (data) {
    // Pause the socket so that another 'data' event cannot touch
    // the backlog while this chunk is still being processed
    socket.pause();
    socketBackLog += data;

    // Shift complete lines out of the backlog one by one; whatever
    // remains is an incomplete line waiting for more data
    var n = socketBackLog.indexOf('\n');
    while (~n) {
        socket.emit('line', socketBackLog.substring(0, n));
        socketBackLog = socketBackLog.substring(n + 1);
        n = socketBackLog.indexOf('\n');
    }
    socket.resume();
}));
socket.on('line', Meteor.bindEnvironment( function (data) {
    processData(data.toString());
}));

Note Meteor.bindEnvironment wrapped around every socket callback – this is the way to keep things inside fibers; otherwise Meteor will complain and fail.

Loose data types

I know it is pretty common practice now, and many languages do not force you to cast variables or declare them with a particular type, and that is somewhat cool in most cases, but sometimes I really miss C-style strictness. This time with JavaScript was exactly such a case. Did I ask for my 1/0 to be converted in some cases to boolean true/false, or to the strings "1"/"0", when I stated it is 0/1? Since I need an integer type, my code is full of bitwise OR operations, which force JavaScript to truncate a value to a 32-bit integer.

An example of inserting some stuff into MongoDB while forcing an integer:

Stuff = new Mongo.Collection("stuff");
Stuff.insert({
    name: "Test",
    number_of_kids: (1 | 0)
});

Basically, I bitwise-OR my "integer" variables with 0 and use the result.
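
Just to illustrate the trick itself (a standalone snippet, not from the project code):

// Bitwise OR with 0 truncates the value to a 32-bit integer
var a = 3.7 | 0;      // 3
var b = '42' | 0;     // 42 – the string is coerced to a number first
var c = true | 0;     // 1
console.log(a, b, c); // 3 42 1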

MongoDB object relationships and related

This is still a mystery to me, and I will try to sort it out eventually. Assume I have one child collection with two different parent collections or, put more traditionally, two OneToMany relations where the "Many" side is the same collection. In my case, here is how it looks:

ParentsA = new Mongo.Collection("parents_a");
ParentsB = new Mongo.Collection("parents_b");
Childs = new Mongo.Collection("childs");

var parent_a = ParentsA.insert({
    name: "parent 1 type 1",
    count: (0 | 0),
    childs: []
});

var parent_b = ParentsB.insert({
    name: "parent 1 type 2",
    count: (0 | 0),
    childs: []
});

var child = Childs.insert({
    name: "child 1",
    parent_a_id: parent_a,
    parent_b_id: parent_b
});
ParentsA.update(parent_a, {$inc: {count: (1 | 0)}, $addToSet: {childs: child}});
ParentsB.update(parent_b, {$inc: {count: (1 | 0)}, $addToSet: {childs: child}});

In my case, if I do a find/findOne on these records, both parents have in their childs field a list of child IDs (not child objects), which I assume is normal. But a strange thing happens with the child record itself: parent_a_id holds a plain ID for parentA, while parent_b_id holds the whole parentB object. So to get the ID of parentA I can use child.parent_a_id, but for parentB I have to use child.parent_b_id._id, and to this day I don't know what controls this behaviour.
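
To make the asymmetry concrete, reading the child back looks roughly like this (a sketch of the behaviour described above, using the variables from the previous snippet):

var c = Childs.findOne(child);
var parentAId = c.parent_a_id;      // a plain ID string
var parentBId = c.parent_b_id._id;  // have to reach into the embedded object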

Another problem I faced: as far as I know, there is no way to query the number of items in a parent's childs field, so I have to keep track of it with the count field. The good thing is that there are a few query modifiers in Mongo that make my life easier. As you can see, I use the $inc modifier to adjust the count, and $addToSet to make sure I don't add the same child to a parent twice.

Setting client session variables from the server

I really love all these reactive things – the way the client acts on collection changes and session adjustments – but one thing is still not clear to me: how can I adjust a client's session variable from the server after some event happens? A simple example:

if (Meteor.isClient) {
    Template.some_template.events({
        'click .send_data': function (event,template) {
            Meteor.call('processData',template.find('input[name="data"]').value);
        }
    });
}
if (Meteor.isServer) {
    Meteor.methods({
        'processData': function (data) {
            var socket = new IMAP({....});
            socket.once('ready', Meteor.bindEnvironment(function () {
                // here I want to set the client's "imap_ready" session variable to true
            }));
            socket.connect();
        }
    });
}

What happens is this: whenever the client clicks the .send_data button, we take the value from the input and pass it to the server method processData, which tries to establish a connection to an IMAP server, and on success I want to update the client's "imap_ready" session variable. The problem is that we don't really know when (if at all) the socket connection will emit the ready event, and processData will surely have returned by then, so the optional callback of Meteor.call is not an option either.

For the time being I solved the problem by introducing a MongoDB collection with session_id, key and value fields. Whenever the client calls a server method of this kind, it passes session_id as an additional argument (by the way, I had to use a persistent session add-on to avoid losing session data, and a uuid add-on to generate a nice session id); then whenever the server has something to pass back, it updates the relevant Mongo document, and on the client side I use observeChanges on the collection to gather the data and put it into the session. Sounds weird, and I don't like it this way, but it somehow works; a rough sketch follows below. If anyone can suggest a better approach to the above problem – feel free to comment or contact me in any other way.
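
Here is a minimal sketch of that bridge, assuming a collection named server_events, a publication that delivers the session's documents to the client, and a client-side session id kept under "session_id" – these names are my placeholders, not taken from the original code:

ServerEvents = new Mongo.Collection("server_events");

if (Meteor.isServer) {
    Meteor.methods({
        'processData': function (data, sessionId) {
            var socket = new IMAP({/* connection settings as above */});
            socket.once('ready', Meteor.bindEnvironment(function () {
                // Upsert a key/value pair addressed to this client's session
                ServerEvents.upsert(
                    {session_id: sessionId, key: 'imap_ready'},
                    {$set: {value: true}}
                );
            }));
            socket.connect();
        }
    });
}

if (Meteor.isClient) {
    Meteor.startup(function () {
        // Assumes a publication exposes this session's server_events docs.
        // Mirror every change addressed to this session into the Session object.
        ServerEvents.find({session_id: Session.get('session_id')}).observeChanges({
            added: function (id, fields) {
                Session.set(fields.key, fields.value);
            },
            changed: function (id) {
                // 'changed' only delivers the changed fields, so look the doc up
                var doc = ServerEvents.findOne(id);
                Session.set(doc.key, doc.value);
            }
        });
    });
}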

I think that’s enough for the time being. Maybe one day I will post a follow up with solutions (where applicable) to the problems above.

Plain SQL vs. Zabbix API for text history items
For one of my recent tasks I had the following requirement: a number of hosts monitored with Zabbix have a history item that provides a list of addresses in text format (one per line), and I needed to take all those lists and build one common list (a table) where each row holds an address, the hostname of the node where that address was last seen, and timestamps.

Initially I added the item to monitor in Zabbix on each node I needed, created a separate table in MySQL to hold the final list, and then made a cron script that would do the following (a sketch of the API calls follows the list):

  • retrieve the list of hosts that have a given item, by item key_, with the Zabbix API
  • retrieve all items by the itemids found in the previous query with the Zabbix API
  • for each item, retrieve the latest history entry with the Zabbix API
  • for each history entry, split the text by newlines to get the addresses, then add each address with its source host and timestamps to the final MySQL table, or update the timestamp and source if the address is already in the list
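
The post doesn't show the original script, so here is a rough Node.js sketch of what those Zabbix API calls could look like as JSON-RPC requests; the endpoint host and auth token are placeholders:

var http = require('http');

// Placeholders – adjust for a real Zabbix server
var ZABBIX_HOST = 'zabbix.example.com';
var AUTH_TOKEN = 'token-from-user.login';

// Minimal JSON-RPC helper for the Zabbix API
function zabbixCall(method, params, callback) {
    var body = JSON.stringify({jsonrpc: '2.0', method: method,
                               params: params, auth: AUTH_TOKEN, id: 1});
    var req = http.request({
        host: ZABBIX_HOST,
        path: '/api_jsonrpc.php',
        method: 'POST',
        headers: {'Content-Type': 'application/json-rpc',
                  'Content-Length': Buffer.byteLength(body)}
    }, function (res) {
        var data = '';
        res.on('data', function (chunk) { data += chunk; });
        res.on('end', function () { callback(JSON.parse(data).result); });
    });
    req.end(body);
}

// Items whose key_ matches the key used later in the SQL below
zabbixCall('item.get', {
    output: ['itemid', 'hostid'],
    search: {key_: 'my_item_key'}
}, function (items) {
    items.forEach(function (item) {
        // Latest history entry for each item (history type 4 = text)
        zabbixCall('history.get', {
            history: 4,
            itemids: item.itemid,
            sortfield: 'clock',
            sortorder: 'DESC',
            limit: 1
        }, function (entries) {
            if (entries.length) {
                var addresses = entries[0].value.split('\n');
                // ...upsert each address into the final MySQL table
            }
        });
    });
});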

With the total size of the list around 5K addresses, all of the above took around 4–5 minutes and consumed a lot of CPU and memory on the server. As I was limited in server resources and wanted the list updated every minute, I decided to avoid the Zabbix API and try to do the job with plain MySQL queries. Since I am only interested in the latest history entry for each node, I recalled my own post from a while ago on SQL GROUP BY with subqueries. After checking the Zabbix SQL structure and a bit of playing with queries, I ended up with a single request that gives me all I need:

SELECT * FROM (SELECT h.host, hi.id, hi.value
FROM hosts AS h, items AS i, history_text AS hi
WHERE i.hostid = h.hostid AND hi.itemid = i.itemid
AND h.status <> 3 AND i.key_ LIKE 'my_item_key%'
AND hi.value <> '' ORDER BY hi.clock DESC) tmp_table
GROUP BY host;

The sub-query gives me the hostname, history entry id and history value from history_text for non-template hosts (status <> 3), with a non-empty value, for the item key_ I want, ordered by time newest first; the main query then takes that list and shrinks it down to one entry per host – the newest one.

Now, having this list, for each result row I can split the value by newline to extract the addresses and add or update them one by one in the final table. Here comes another trick that I described in an old post: since I need to update the source for entries that already exist in the final table while inserting anything that is not there yet, I run the insert with the following SQL statement, given that the address field is a primary key and thus unique:

INSERT INTO final_table (address, source)
VALUES ('$address', '$source')
ON DUPLICATE KEY UPDATE source = '$source';

So whenever there is a conflict on the address field, the source field gets updated with the new value.
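
The post doesn't say what language the cron script is written in; as an illustration, the split-by-newline loop with this upsert could look roughly like the following in Node.js with the mysql package (connection settings are placeholders):

var mysql = require('mysql');

// Placeholder connection settings
var db = mysql.createConnection({
    host: 'localhost',
    user: 'zabbix',
    password: 'secret',
    database: 'zabbix'
});

// rows: result of the SELECT above – objects with host, id and value fields
function importRows(rows) {
    rows.forEach(function (row) {
        row.value.split('\n').forEach(function (address) {
            if (!address) return; // skip empty lines
            // Parameterised form of the INSERT ... ON DUPLICATE KEY UPDATE above
            db.query(
                'INSERT INTO final_table (address, source) VALUES (?, ?) ' +
                'ON DUPLICATE KEY UPDATE source = VALUES(source)',
                [address, row.host]
            );
        });
    });
}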

After changing the Zabbix API queries to native SQL, the script runs in a few seconds and consumes almost nothing, as it relies on the MySQL engine to do most of the job – and MySQL can do it much better.

Finally, if there is no interest in the history items after they have been imported into final_table, it is possible to delete all rows for those items from history_text for a given key_ with the following SQL query:

DELETE FROM history_text WHERE itemid IN
(SELECT itemid FROM items WHERE key_ LIKE 'my_item_key%');

This is an alternative to relying on the Zabbix housekeeper, which will do the same job, just a bit later. And if polling of the nodes for this item is frequent and the resulting values are big, the history will consume space in MySQL that we want to avoid.
