Running sudo in background

A common way of running bash commands in the background is appending “&” to the end of the command. But what happens if you need to run a sudo command in the background? Appending “&” to the end of “sudo whatever.sh” will not work. According to this thread on StackOverflow:

The problem is that the sudo command itself is being run in the background. As a result, it will be stopped (SIGSTOP) when it tries to access the standard input to read the password.

A simple solution is to create a shell script to run synaptic & and then sudo the script in the foreground (i.e. without &).
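For illustration, a minimal sketch of that wrapper approach. The file path and the inner sleep command are stand-ins I made up; in real use the script body would be your actual whatever.sh:

```shell
# Create a throwaway wrapper; the "&" lives *inside* the script,
# so sudo itself stays in the foreground and can prompt for a password.
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/sh
sleep 30 &                 # stand-in for the real long-running command
echo "started pid $!"
EOF
chmod +x /tmp/wrapper.sh

# Real usage would be:  sudo /tmp/wrapper.sh
# (run here without sudo just to show it returns immediately)
/tmp/wrapper.sh
```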

Yes, making a wrapper script that runs the sudo command and then calling this wrapper script with “&” at the end is a common way, but there is an alternative that I like (especially when you don’t really care about the command output): the screen utility:

screen -d -m sudo whatever.sh

This will create a new detached screen session and run the sudo command inside it without blocking the terminal (or program execution, if you run the command from some other script/program). If the command is long-running, you will be able to attach to the screen session and see the output. When the command completes, the screen session will be terminated.

I will leave it here for future reference.

AWS Beanstalk and more

While playing around with AWS Beanstalk (EB) and related infrastructure, I found out a couple of things that I want to note for future reference.

Initial scenario: an AWS Beanstalk environment in a private subnet, with an AWS LoadBalancer (LB) serving requests and doing SSL termination. No need to do end-to-end SSL, as the actual EC2 instances are in a private subnet. Also need to redirect HTTP to HTTPS, preferably at the load balancer level.

So first we need to create a VPC, an Internet Gateway and some subnets. I have one public subnet (local routes + default via the Internet GW) for service things like NAT gateways, bastion hosts, etc, two private subnets (local traffic only) for web instances (in different AZs) and two public subnets for load balancers.

Here come the first few notes:

  • to have EC2 instances of web servers in a private subnet, you need to have the LB in a public subnet.
  • private subnets with EC2 instances should have a routing table with a default route to a NAT Gateway, which MUST be in a public subnet (either along with the LB or in a separate service subnet).
  • the VPC MUST have DNS resolution and DNS hostnames set to YES.

The above is more or less enough to start with EB, as it will create a bunch of stuff for you automatically (or you can find a way to mess around with plenty of things).

For EB:

  • Make sure the EB environment is linked to the appropriate VPC before you start checking other settings, as this is what brings in all those subnets, security groups and other options.
  • Use a multi-instance setup with an LB even if you plan to have only a single instance. Just set the auto scaling max to 1; this will give you a bunch of options and flexibility later on.
  • Use an Application LB instead of a Classic one.
  • Use your proper Key-Pair for EC2 instances in the Security section, as it will give you a chance to SSH to instances to troubleshoot in case of problems (via a bastion host, or by temporarily making the web subnet public and attaching an elastic IP to the instance).
  • Modify the webroot in the software configuration in case your project is not served directly from the root of the project (/public, /webroot, etc).
  • Utilize Environment Variables for passing info about DB settings, DEBUG, etc, if your application supports it. Very handy.
  • Use Amazon Certificate Manager + Route53 for issuing and renewing SSL certificates that you can attach to the LB.
  • Make sure you have both listeners in the LB setup: HTTP and HTTPS.

When your environment is up and running, there are a couple of things to adjust:

  • CNAME your environment domain to the EB entrypoint domain.
  • In EC2, modify the rules for the EB listener on port 80 and add a rule on top of the default one to redirect to HTTPS (same host, path, args) when the path is *.
  • In case your app uses full URLs, you may find that it sends links over HTTP. In my case I pass a BASE_URL env var to the environment of the EC2 instances; my app picks it up from there and returns correct links and refs to other resources like JS & CSS.
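As a sketch only (I did this via the console, not the CLI): the same HTTP-to-HTTPS redirect rule could be created with the AWS CLI's elbv2 commands. The $HTTP_LISTENER_ARN variable below is a placeholder for the ARN of your port-80 listener:

```shell
# Hypothetical CLI equivalent of the console redirect rule. By default
# the redirect keeps the original host, path and query string.
aws elbv2 create-rule \
  --listener-arn "$HTTP_LISTENER_ARN" \
  --priority 1 \
  --conditions Field=path-pattern,Values='*' \
  --actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
```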

This is just a short list of things to keep in mind. More things might come, and most of this should go into AWS CloudFormation, but I had to play around manually via the AWS Management Console first to get a feeling for the service.

TCP requests with plain bash in Linux

Recently I had a problem where I needed to check if a specific port on a specific host was available before I could proceed. Basically it was docker-compose running a container that depends on a mysql service, and I had to make sure the mysql service was available before continuing with my main docker container.

First I looked into an nc implementation of the waiting part in the container entrypoint, which looked something like this:

sh -c 'until nc -z mysql_host 3306; do sleep 1; done'

but it turned out I didn’t have nc installed in the container and was too lazy to rebuild the whole thing just for this tiny tool.

Apparently, this discussion pointed out a solution:

sh -c 'until printf "" 2>>/dev/null >>/dev/tcp/mysql_hostname/3306; do sleep 1; done'

which was something I had never seen before, so a bit of googling brought me to this nice page with an example of how to make an HTTP request and get the response while working with /dev/tcp. Useful to know, so I am leaving it here.
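Building on the same trick, the one-liner can be wrapped into a small reusable function. The function name and the retry-limit argument are my own sketch, not from the original post; note that /dev/tcp is a bash feature, so this needs bash, not plain sh:

```shell
#!/bin/bash
# Probe host:port using bash's built-in /dev/tcp, retrying once per
# second until the port accepts connections or we run out of attempts.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30} i=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1   # give up: port never opened
    sleep 1
  done
}

# Example: block until MySQL is reachable, then start the main process
# wait_for_port mysql_hostname 3306 && exec start_my_app
```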

Graph database

In one of the side projects I am involved in during my free time (which helps me learn a bunch of things that are not really in my normal workflow), I faced a problem of working with some big sets of data (millions of rows per table, multiple tables, a bunch of relations). Trying the old ways of improving performance, like tuning the DB engines (MySQL, MariaDB), optimising code to use low-level queries, multi-row inserts, tweaks to the data models, etc, didn’t give the desired results, so I went out googling and discovered graph databases. Not something new in general, as Graph Theory is well known, but the use case is pretty interesting.

Not that I am already deep into it, but it feels like I will spend some time looking into the technology. First I got a hint about Neo4j somewhere on StackOverflow, but didn’t like something about it and went on further googling of the subject. I ended up at Top 15 Free Graph Databases. I first stopped on OrientDB, installed it for a test and played around, and while it looks very promising, I have a couple of issues with it:

  • Java: it is a personal issue of mine, not being in love with Java at all. Installing a JDK on a server just to run a DB is something I would do only if absolutely required for my complete happiness. Not that I had any issues during testing; I still don’t quite trust it internally.
  • Poor documentation: there is pretty extensive documentation on their web-site, but it is a bit hard to navigate, and when you seek Google's help for what you need, you mostly end up on 404 pages, so either old version links are in Google and no longer on the site, or something else weird is going on.
  • Driver interfaces (outside of the Java world) are badly documented and/or badly implemented, at least for PHP. Both the official PHPOrient and the Doctrine ODM show only small usage snippets with no clear overview of what is possible (apart from basic things).

While there are cons above, there are obviously some pros, like almost-native SQL, an easy install (even with Java involved), a nice tool-set, etc.

After reviewing the list of 15 databases, my second choice was ArangoDB, which:

  • Is written in C++
  • Has very good, solid documentation with lots of examples (even comparisons for people who come from a traditional SQL background)
  • Has lots of pre-built packages for different operating systems and a YUM repo for RedHat followers
  • Has a convincing benchmark comparison of different DB engines and scenarios (I won’t vouch for its truth, as benchmarks are always tricky, but who doesn’t like graphs?)

I still need to get my hands dirty with it, but I think this will be a nice journey.

If you are into the topic, please leave your thoughts and ideas in the comments or send them to me via any other possible path of communication to save time and effort :-)

SSH via bastion host with ForwardAgent

While it is pretty common to have an infrastructure behind load-balancers and bastion hosts, there is still a lot of confusion around the actual configuration of the SSH client for fast and convenient use of such a setup. While I am not going to talk about the actual advantages of bastion hosts, I will put here some clarifications on the SSH client setup.

Assuming you have a bastion.host that you use as a connection gateway to your private.host, and you want to work with your default SSH key that exists only on your local PC/laptop, you have two possible ways.

The first and most commonly used way is SSH agent forwarding, meaning you have to run ssh-agent on your laptop, add the SSH keys to it via the ssh-add command (or use ssh-add -L to list all keys in the agent) and then use ForwardAgent yes in ~/.ssh/config, something like this:

Host bastion.host
    User ssh_user
    HostName bastion.host
    ProxyCommand none
    IdentityFile ~/.ssh/id_rsa
    PasswordAuthentication no
    ForwardAgent yes
Host private.host
    User ssh_user
    ProxyCommand ssh -q -A bastion.host -W %h:%p
    IdentityFile ~/.ssh/id_rsa
    ForwardAgent yes

And while this is all cool from one point of view, this method has a few drawbacks:

  • Running ForwardAgent is not a good idea in terms of security, and you can read more about it here.
  • Running ForwardAgent requires you to actually configure and run ssh-agent on your local PC/laptop, which is not a big deal at all, but you will have to remember to check it all the time, or you will get all kinds of authentication errors and spend some time finding the reason for them (a not-running or misconfigured agent).

The second method achieves the same bastion-host functionality while avoiding the mess with ssh-agent: the SSH ProxyCommand. In this scenario, when configured properly, ssh first runs the ProxyCommand to establish a connection to bastion.host and then tunnels through this connection to private.host. This way bastion.host knows nothing about your keys or anything related to authentication; it just makes a tunnel (similar to SSH port forwarding) and keeps it for you until you are done.

To get this to work, you would adjust the ~/.ssh/config as follows:

Host bastion.host
    User ssh_user
    IdentityFile ~/.ssh/id_rsa
    ForwardAgent no
Host private.host
    User ssh_user
    ProxyCommand ssh -W %h:%p -q bastion.host
    IdentityFile ~/.ssh/id_rsa
    ForwardAgent no

So now that you have everything in place and configured, you can ssh private.host and enjoy the stay on your secure server. While this is all cool, there are a lot of default assumptions behind the scenes which you don’t bother to learn until you face slightly different requirements: assume that you need to keep the SSH configuration and per-host keys not in your home .ssh directory, but somewhere else. Let’s say you have some /very/secure/location with a separate ssh.conf (with the content from above), plus a bastion.id_rsa and a private.id_rsa to use for the connections. You would assume that you only need to adjust the IdentityFile settings to point to the correct keys and then run SSH as follows: ssh -F /very/secure/location/ssh.conf private.host

Bad news: it will not work and will give you an authentication error. Though you will still be able to access bastion.host this way, you won’t be able to reach your final destination at private.host.

Good news: thanks to this lovely discussion on StackOverflow, only a minor adjustment has to be made to your ProxyCommand: you need to pass the ssh config file to it as well, so it will look like: ProxyCommand ssh -F /very/secure/location/ssh.conf -W %h:%p -q bastion.host

Obviously the reason is that by giving -F to the initial ssh command, you instruct it to look for a specific configuration file, but when it runs the ProxyCommand, that instance of the ssh client has no clue whatsoever about your custom config and will look for the default one in ~/.ssh/config and the system-wide settings.
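Putting it together, the relocated config at /very/secure/location/ssh.conf would then look something like this (host names and key file names as assumed above):

```
Host bastion.host
    User ssh_user
    IdentityFile /very/secure/location/bastion.id_rsa
    ForwardAgent no
Host private.host
    User ssh_user
    ProxyCommand ssh -F /very/secure/location/ssh.conf -W %h:%p -q bastion.host
    IdentityFile /very/secure/location/private.id_rsa
    ForwardAgent no
```

invoked as ssh -F /very/secure/location/ssh.conf private.host.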

I’ve spent quite some time figuring out what was going on, and in order not to do so again (and hopefully to save some of your time), let this post be here for future reference.