Monitor load in the terminal

Just a small one that tends to get my attention when I have a terminal open on a remote machine and I’d like to notice right away when the load rises. Of course, one could just use echo -e "\a", which works if you have the visual bell enabled, or if you want to annoy the people around you and have the audible bell enabled.
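
A minimal sketch of the idea, assuming a Linux box (so /proc/loadavg exists); the 0.9 threshold and the ten-second poll are arbitrary, so pick whatever counts as busy for your machine:

# ring the terminal bell whenever the 1-minute load average crosses a threshold
while sleep 10; do
    load=$(cut -d' ' -f1 /proc/loadavg)
    if awk -v l="$load" 'BEGIN { exit !(l > 0.9) }'; then
        printf '\a load is %s\n' "$load"
    fi
done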

Prolong the life of the SD card in your Raspberry Pi

The web is littered with stories of people who love their Raspberry Pis but are disappointed to learn that the Pi often eats the SD card. I’ve recovered a card once, but otherwise I’ve had a few that were destroyed and not recoverable. I’ll lay out how I use This One Weird Trick(tm), ahem, to try to prolong the life of the SD card.

First I should point out that my Pi storage layout is not typical. I basically followed this guide to boot from SD card, but run the root filesystem on a flash drive. While the stated purpose of the guide is to help reduce activity on the SD card (and improve storage performance somewhat), I come at the SD card corruption issue from a different perspective.

In my view, the corruption is most likely caused by a timing bug which could be rather low-level in the design or implementation of the hardware itself. Writing to the card less often probably reduces the chances of corruption, but my personal feeling is that after a Pi has been powered on for a certain amount of time, you can’t really predict whether the bug is going to manifest. I don’t believe that most instances of SD card corruption happen in the first hours or days after a Pi boots up, so my goal was to confine writes to that initial window whenever possible.

After following the guide linked above, the SD card now hosts only the /boot partition. Once init has started from / (the external storage), we really don’t need /boot any longer. In the middle of my /etc/rc.local file, I’ve added
mount -o ro,remount /boot

In the typical usage of a running system, /boot doesn’t really need to be mounted read-write. Of course, if you forget it’s mounted read-only, then things like apt-get upgrade or rpi-update may well fail. Now when I want to run those commands I first reboot the Pi, and then remount the /boot partition read-write with
sudo mount -o remount,rw /boot

Once the updating is done, I reboot again and leave /boot read-only.
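
If you ever forget which state it’s in, findmnt will show the current mount options:

findmnt -o TARGET,OPTIONS /boot   # OPTIONS should include "ro" during normal operation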

kworker using cpu on an otherwise idle system

I have an old thin client that I upgraded to a home server by adding some additional RAM and storage. I noticed after a recent kernel upgrade that the system seemed sluggish at times, even though it wasn’t doing anything in particular. top showed that a kworker process was using CPU; not all of it, but perhaps 25 to 50% of the total.

I did a lot of searching to try to track down the offender. I used tools such as perf and iotop, and read about various tunables under /proc related to power management. Finally, I ran Intel’s powertop command. It showed that “Audio codec alsa…” was hammering on some event loop.

I looked at the loaded kernel modules, and on a whim, I did sudo rmmod snd_hda_intel and that fixed the issue for me.

Others may find that a kworker is running in a tight loop for some other reason. It could be some other misbehaving driver or an I/O problem.
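
If you’re hunting a similar mystery, a short whole-system perf sample is a reasonable place to start; this is a generic recipe rather than the exact commands I ran, and the perf package name varies by distro (e.g. linux-perf on Debian):

# record call graphs system-wide for ten seconds, then see which kernel
# symbols the kworker threads are spending their time in
sudo perf record -a -g -- sleep 10
sudo perf report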

Finding how much time Apache requests take

When a request is logged in Apache’s common or combined format, it doesn’t actually show you how much time each request took to complete. To make reading logs a bit more confusing, each request is logged only once it’s completed. So a long-running request may have an earlier start time but appear later in the log than quicker requests.

To help look at some timing info without going deep enough to need a debugger, I decided that step one was to use a custom log format that saved the total request time. After adding usec:%D to the end of my Apache custom log format, we can now see how long various requests are taking to complete.
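
For reference, the LogFormat line ends up looking roughly like this; combined_usec is just a nickname I’m using here, so base it on whatever format your vhosts already reference (on Debian-style layouts, ${APACHE_LOG_DIR} comes from envvars):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" usec:%D" combined_usec
CustomLog ${APACHE_LOG_DIR}/access.log combined_usec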

tail -q -n 1000 *access.log | mawk 'BEGIN { FS="(:|GET |POST | HTTP/1.1|\")" } { print $NF" "$6 }' | sort -nr | head -n 100 > /tmp/heavy2

I’m using the “%D” format, which reports the response time in microseconds, for compatibility with older Apache releases. I would prefer milliseconds, but when I tried using “%{ms}T” on a server running 2.4.7, it didn’t work; too old. The raw numbers are a bit hard to read, so we can add a little visual aid by using commas as the thousands separator.

cat /tmp/heavy2 | xargs -L 1 printf "%'d %s\n" | less

Note that because we are measuring the total request time, some of the numbers may be high due to remote network latency or a slow client. I recommend correlating several samples before blaming some piece of local application code.

Hope this helps you find your long-running requests!

Default route via VPN while keeping LAN & services available

OpenVPN is working great and all, but I was having trouble getting my other LAN hosts to connect to the OpenVPN client system (a Raspberry Pi) while also keeping the services I normally run on it available from the internet. On the remote server, I was using redirect-gateway def1, which works but makes some assumptions about how you intend to use it.

After a lot of frustration and perusal of almost-but-not-quite posts on OpenVPN troubleshooting, I came across an article which didn’t mention OpenVPN but instead discussed how to set default routes for multiple interfaces.

Here’s what I took away. Extra lines in /etc/openvpn/client.conf:
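
Roughly speaking, something like this; treat it as a sketch rather than my exact file, since the script path is whatever you choose (script-security 2 is what allows OpenVPN to run it):

script-security 2
up /etc/openvpn/multiple_gateways.sh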

and in multiple_gateways.sh:
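
Again a sketch: the addresses below are placeholders (assume the Pi is 192.168.1.10 on eth0 and the LAN gateway is 192.168.1.1), so substitute your own. The idea is to give traffic originating from the Pi’s LAN address its own routing table, so replies to LAN and internet-facing connections go back out eth0 instead of the tunnel:

#!/bin/sh
# policy routing: packets sourced from the LAN address use table 100,
# whose default route is the LAN gateway rather than the VPN
ip rule add from 192.168.1.10 table 100
ip route add 192.168.1.0/24 dev eth0 src 192.168.1.10 table 100
ip route add default via 192.168.1.1 dev eth0 table 100
ip route flush cache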

One caveat: I haven’t done a ton of testing, and after rebooting my Pi, it didn’t come up cleanly, so a down.sh script may be needed to tear down the extra config when OpenVPN disconnects. That being said, I have services available from the internet, connections from the LAN to the Pi working, and the default route for outgoing connections still going over the VPN.

Tunnelblick disconnect fails to remove route

Tunnelblick is an awesome OpenVPN client, which I have been using a lot lately on my Mac. I had a problem where it would connect the first time just fine, but then would never reconnect; it would seem to hang while trying to handshake with the server. I could get it to work again if I rebooted my machine, but that’s powerfully inconvenient.

TL;DR temporary fix:
On disconnect, Tunnelblick fails to remove a static route it used while active. I created a script that I run after disconnecting which drops the static route. It basically just does this:
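
Something along these lines; the destination here is a placeholder, so check netstat -rn after a disconnect for whatever stale route your config actually leaves behind:

# find the leftover route, then delete it
netstat -rn | grep 192
sudo route -n delete -net 192.168.1.0/24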

The 192 address assumes that you didn’t customize that part of the config, so YMMV.

scary rando stuff

You don’t see stuff like this every day (I hope).

Rename and Iconv are like Chocolate and Peanut Butter

The features of iconv are probably built into Perl rename (aka prename), but when I tried the example from the man page, it kept generating an error message, which I presume is due to a missing Perl module.

So I suppose if we don’t care much about speed then we can just use an external utility like iconv to help convert a whole bunch of crap file names to something a little more universally-digestible. Change to the directory with the unfriendly file names and then run this rename and iconv combo to bulk rename them:
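
Something like the following; it’s a sketch, so run it with -n first for a dry run (most prename variants support it) and drop the -n when the proposed renames look right. The backticks do naive shell quoting, which is fine for typical junk names but not for hostile ones, and I’m assuming you’re transliterating UTF-8 names down to ASCII; swap the -f/-t encodings for your situation:

rename -n 'my $new = `echo "$_" | iconv -f UTF-8 -t ASCII//TRANSLIT`; chomp $new; $_ = $new if length $new' *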

I just renamed over two thousand files with this, so I haven’t looked at all of them, but so far it looks like it’s done a great job.

You can find rename 0.3 here: http://www.perlmonks.org/?node_id=303814, but I got it from homebrew.

Speed of the sort command

GNU sort is normally crazy fast at what it does. However, recently I was trying to sort and unique several huge files and it seemed to be taking way too long. I did a little googling and realized that sorting in a UTF-8 locale takes a lot longer, because sort has to decode one or more bytes per character and apply the locale’s collation rules before deciding where each line belongs. There’s an easy way to increase the speed of the sort command, given a few caveats.

I’m not sure how I hadn’t run into this already, but I love it whenever I stumble on one of these little gems. The solution is pretty simple:
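
Prefix the command with LC_ALL=C so sort runs in the plain C locale (the filenames here are placeholders):

LC_ALL=C sort -u big-file-1 big-file-2 > sorted-unique.txt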

The C locale simply sorts by byte value, so non-ASCII characters may end up in an order that looks wrong for your language. If you don’t need locale-correct collation, just a consistent ordering, this seems to be the way to go.

Moving Evernote notes into WordPress

proprietary insecurity

I’ve accumulated many notes (2000+) in Evernote over the years, and I love that it can store binary attachments such as images or other media files. My favorite feature is the Evernote Web Clipper browser extension; it does a fantastic job of saving the parts of an article I want to keep while leaving the styling intact.

Evernote has a free plan which I’ve enjoyed for a long time, but recently the financial status of the company has come into question, and they restricted syncing to only two devices. Also, the last thing I want is another Google Reader shutdown fiasco. I doubt that a shutdown would make my existing notes disappear, but it’s better to be prepared ahead of time. To that end, I’ve been looking for a viable option for migrating my notes to another platform.

Apache, Fastcgi, PHP 7 on Debian Wheezy & Ubuntu 14.04

Intro: The Tyranny of Prefork

There are a lot of tutorials out there that go through the rote instructions for upgrading your Debian or Ubuntu system to PHP 7. While I’m sure most of them are fine, they assume you want either the prefork process model or the event/threaded MPMs with PHP behind the proxy and fcgi modules. Prefork is certainly battle-tested, but it uses a ton more memory than it needs to, so I’m going to document how to upgrade an existing Fastcgi install to PHP 7.

distribution: histograms in the terminal

My new favorite tool is a python program called distribution that can easily show histograms in your terminal.
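
For example, you can pipe any column of repetitive values into it and get a quick frequency chart; I’m assuming the script ended up on your PATH as distribution:

# e.g. how common each login shell is on this box
cut -d: -f7 /etc/passwd | distribution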

I used homebrew to install it, but you can see some usage examples and a few other tools on this stackoverflow page. I eagerly anticipate showing off some histograms to people.

Debian server DNS bogosity

Note: I’m running my Raspberry Pi as a server, and NetworkManager is not installed.

I discovered that if you want to manually assign search and nameserver entries in your /etc/resolv.conf file, you can’t just add the relevant entries to the static stanza in /etc/network/interfaces.
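
That is, a stanza along these lines (addresses and domain are placeholders) isn’t enough on its own:

iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    dns-search example.lan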

For some unknown reason, the resolvconf utility will still attempt to query an upstream DHCP server to get additional name service data. I don’t know why it works this way; I believe it should be hands-off if you’ve specified static in your interfaces file. I finally found that dhcpcd was being called to fetch the info, and added a line to /etc/dhcpcd.conf to disable its actions on eth0.
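
The dhcpcd.conf directive for that is denyinterfaces, which lists the interfaces dhcpcd should leave alone:

denyinterfaces eth0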

I suppose if I wanted additional interfaces to work properly using DHCP, I’d have to get rid of all this and configure each interface manually via NetworkManager or wicd.