The web is littered with stories of people who love their Raspberry Pis but are disappointed to learn that the Pi often eats the SD card. I've managed to recover a card once; a few others were destroyed beyond recovery. I'll lay out how I use This One Weird Trick(tm), ahem, to try to prolong the life of the SD card.
First I should point out that my Pi storage layout is not typical. I basically followed this guide to boot from SD card, but run the root filesystem on a flash drive. While the stated purpose of the guide is to help reduce activity on the SD card (and improve storage performance somewhat), I come at the SD card corruption issue from a different perspective.
In my view, the corruption is most likely caused by a timing bug, possibly quite low-level in the design or implementation of the hardware itself. Writing to the card less often probably reduces the chance of corruption, but my feeling is that once a Pi has been powered on for a while, you can't really predict when the bug will manifest. I don't believe most instances of SD card corruption happen in the first hours or days after a Pi boots up, so my goal was to confine writes to that initial window, if possible.
After following the guide linked above, the SD card now hosts only the /boot partition. Once init has started on / (the external storage), we really don't need /boot any longer. In the middle of my /etc/rc.local file, I've added:

mount -o ro,remount /boot
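So the relevant part of rc.local ends up looking roughly like this (a sketch: the shebang and exit 0 are from the stock Raspbian file, and any other startup commands keep their place):

#!/bin/sh -e
# /etc/rc.local (excerpt)
# ... whatever else runs at boot ...
mount -o ro,remount /boot
exit 0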
In typical usage of a running system, /boot doesn't really need to be mounted read-write. Of course, if you forget it's mounted read-only, things like apt-get upgrade or rpi-update may well fail. Now when I want to run those commands, I first reboot the Pi, then remount the /boot partition with:

sudo mount -o remount,rw /boot
Once the updating is done, I reboot again and leave /boot read-only.
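Put together, the whole cycle looks like this:

sudo reboot                        # start from a fresh boot
sudo mount -o remount,rw /boot     # after logging back in
sudo apt-get update && sudo apt-get upgrade    # or rpi-update
sudo reboot                        # rc.local remounts /boot read-only again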
OpenVPN is working great and all, but I was having trouble getting my other LAN hosts to connect to the OpenVPN client system (a Raspberry Pi) while also keeping the services I normally run on it available from the internet. On the remote server, I was using redirect-gateway def1, which works but makes some assumptions about how you intend to use it.
After a lot of frustration and perusal of almost-but-not-quite posts on OpenVPN troubleshooting, I came across an article which didn’t mention OpenVPN but instead discussed how to set default routes for multiple interfaces.
Here’s what I took away. Extra lines in /etc/openvpn/client.conf:
/sbin/ip route add default via _local_gateway_ dev eth0 table mypriv
/sbin/ip rule add from _local_ip_/32 table mypriv
/sbin/ip rule add to _local_ip_/32 table mypriv
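Two bits of wiring these commands assume. First, the mypriv table has to be registered by name, with a line like this in /etc/iproute2/rt_tables (the number just needs to be unused):

200 mypriv

Second, OpenVPN doesn't execute raw ip commands from its config, so I take it these lines actually run from a hook script referenced in client.conf via the standard directives (the script path here is a placeholder of mine):

script-security 2
route-up /etc/openvpn/route-up.sh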
One caveat: I haven't done a ton of testing, and after rebooting my Pi, it didn't come up cleanly, so a down.sh script may be needed to tear down the extra config when OpenVPN disconnects (a sketch follows below). That being said, I have services available from the internet, connections from the LAN to the Pi working, and the default route for outgoing connections still going over the VPN.
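A sketch of what such a down.sh might look like (untested; the placeholders match the ones above):

#!/bin/sh
# Tear down the policy routing added at connect time
/sbin/ip rule del from _local_ip_/32 table mypriv 2>/dev/null || true
/sbin/ip rule del to _local_ip_/32 table mypriv 2>/dev/null || true
/sbin/ip route flush table mypriv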
Note: I’m running my Raspberry Pi as a server, and NetworkManager is not installed.
I discovered that if you want to manually assign search and nameserver entries in your /etc/resolv.conf file, you can't just add the relevant entries to the static stanza in /etc/network/interfaces:
iface eth0 inet static
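For illustration, a fuller version of that stanza, with placeholder addresses and the DNS entries you'd expect to be honored:

iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    dns-search lan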
For some unknown reason, the resolvconf utility will still attempt to query an upstream DHCP server for additional name service data. I don't know why it works this way; I believe it should be hands-off once you've specified static in your interfaces file. I finally found that dhcpcd was being called to get the info, and added the following line to /etc/dhcpcd.conf to disable actions relating to eth0:
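dhcpcd's documented directive for blacklisting an interface is denyinterfaces, which is almost certainly the line in question:

# /etc/dhcpcd.conf
denyinterfaces eth0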
I suppose if I wanted additional interfaces to work properly over DHCP, I'd have to get rid of all this and manage each interface via NetworkManager or wicd instead.
I would really like to rid myself of Dropbox, but all the alternatives I've tried are too bloated, stuck at alpha or beta quality, too complicated to set up, or just plain don't do what Dropbox does (minus the sharing features, which I don't care about). I don't want btsync; it's closed-source. Seafile is too complicated and makes dubious security claims. Owncloud is a cool project, but its file sync is slow, error-prone, and has other limitations. There are some good services, but they don't run on all the platforms I need: Mac OS X, Linux x86 (32- and 64-bit), Linux ARMv6 (my Raspberry Pi B), and Android. I ran Syncthing for a while, but its continuous memory usage is pretty steep for the Pi, and I've experienced random silent file truncation in my shared directories with it. So I needed something else; enter Unison.
Yes, the Raspberry Pi can do fast video encoding. Of course, you normally wouldn't want to re-encode video with an ARM processor, but that's not what we're going to do here; we're going to leverage the GPU. I should point out before proceeding that the input formats for re-encoding are limited with this method; more on that below.
In order to do this, I’m using a proof-of-concept tool called omxtx, which I think is supposed to be a shortened form of “OpenMAX Transcoding”. Off the top of my head, here are the prerequisites for building the binary from source:
Raspbian. It will probably work on other RPi distros, but I haven’t tried them.
The build-essential package installed, which you normally need to build anything.
A memory split of 64MB for video. I previously had this all the way down to 16 since I don't use a display on my Pi, but bumping it up to only 32MB still caused runtime errors from the omxtx binary. You need to give the GPU some breathing room to encode video; see the snippet after this list.
There’s probably some libraries you may or may not have installed that the build wants to link in. When I run ldd on my finished binary, it loads all kinds of media libs like libav, libvorbis, libvpx, etc. YMMV.
I was curious to see how quickly I could transfer files to my Pi using SSH rather than FTP. Obviously plain FTP is faster than almost any other method, since it skips the encryption overhead, but I still wanted to see how fast I could move data over SSH.
Here’s the time it took to transfer a 50 MB file to my Pi using different SSH ciphers.
I later re-tested the aes128-ctr cipher and it took about a second less than what I’d recorded initially. This boils down to:
Don’t use triple-DES ever, for both performance and security reasons
Most other ciphers give about the same performance, and are generally considered secure
arcfour is the fastest class of ciphers, but the crypto community places less trust in it. If you're going to use it, avoid the base arcfour cipher and use the 128 or 256 variant instead, which discards some of the initial keystream bytes as a precaution
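If you settle on a cipher, you can pin it per-host in ~/.ssh/config rather than passing -c every time (host names here are placeholders):

Host pi
    HostName raspberrypi.local
    Ciphers aes128-ctr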
The usual suspects failed me last night when the $DISPLAY environment variable wasn't being set after I logged in to my Pi via SSH. The usual suspects being: making sure the X11 forwarding options were turned on in /etc/ssh/sshd_config on the server and in ssh_config on the client, and using the command-line options -X or -Y.
So I tried logging in again with the debug level turned up (-vvv) and saw the message X11 forwarding request failed on channel 0. I remembered from the last time this happened that you also need a particular package on the server side to allow X11 authentication: whatever package contains the xauth binary (on Raspbian it's simply xauth). However, it was there and seemed to be working properly.
The Googles turned up this link, which showed that a new option may need to be in your sshd_config on a newer version of OpenSSH:
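The two sshd_config settings commonly suggested for this exact error are below; this is an educated guess at what the link recommends, and which one applies depends on whether IPv6 is disabled on the host, since sshd may fail trying to bind the forwarded X11 port on ::1:

# /etc/ssh/sshd_config
AddressFamily inet
# or, alternatively:
# X11UseLocalhost no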
I then did a sudo service ssh restart, which thankfully is smart enough not to kill your existing SSH session, and logged in again. Finally, I saw $DISPLAY set as expected.