kworker using cpu on an otherwise idle system

I have an old thin client that I upgraded to a home server by adding some additional RAM and storage. After a recent kernel upgrade, I noticed that the system seemed sluggish at times, despite doing nothing in particular. top showed that a kworker process was using CPU: not all of it, but perhaps 25 to 50% of the total.

I did a lot of searching to try to track down the offender. I used tools such as perf and iotop, and read about various tunables under /proc related to power management. Finally, I ran Intel’s powertop command. It showed that “Audio codec alsa…” was hammering on some event loop.

I looked at the loaded kernel modules, and on a whim, I did sudo rmmod snd_hda_intel and that fixed the issue for me.
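For reference, that fix was just unloading the module. To make it stick across reboots you can also blacklist it; the file name under /etc/modprobe.d below is my own choice, any name ending in .conf works:

```
sudo rmmod snd_hda_intel

# optional: keep the module from loading at boot
echo 'blacklist snd_hda_intel' | sudo tee /etc/modprobe.d/no-hda-audio.conf
```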

Others may find that a kworker is running in a tight loop for some other reason. It could be some other misbehaving driver or an I/O problem.

Keep getting logged out from Selfoss on Debian

I’m running Selfoss RSS reader and loving it!

One thing I don’t love is that it logs me out frequently (BTW, I’m running Apache php-fpm on Debian Jessie). But I think I found a solution. Try adding this to a file called .user.ini in the document root of Selfoss:
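The snippet itself didn’t survive in this copy of the post. Given the follow-up notes, it presumably raised the session lifetime; the values come from the text below, but the exact directives are my best guess:

```
php_value session.gc_maxlifetime 604800
php_value session.cookie_lifetime 604800
```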

The 604800 means one week, in seconds. If you’re running mod_php rather than FPM, you can add these lines to your .htaccess file.

UPDATE: The format for .user.ini is not the same used in .htaccess. The .user.ini version looks like this:
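Reconstructed under the assumption that it set the session lifetime and cache limiter directives (values and directive names are my guess based on the surrounding text):

```
session.gc_maxlifetime = 604800
session.cookie_lifetime = 604800
session.cache_limiter = public
```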

Use whatever session.cache_limiter setting suits your needs best.

Debian server DNS bogosity

Note: I’m running my Raspberry Pi as a server, and NetworkManager is not installed.

I discovered that if you want to manually assign search and nameserver entries in your /etc/resolv.conf file, you can’t just add the relevant entries to the static stanza in /etc/network/interfaces:
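The stanza in question is missing from this copy; it presumably looked something like this (addresses and the search domain are hypothetical):

```
# /etc/network/interfaces
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    dns-search example.lan
```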

For some unknown reason, the resolvconf utility will still attempt to query an upstream DHCP server to get additional name service data. I don’t know why it works this way; I believe it should be hands-off if you’ve specified static in your interfaces file. I finally found that dhcpcd was being called to get the info, and added the following line to /etc/dhcpcd.conf to disable actions relating to eth0:
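The line itself isn’t shown here; the dhcpcd.conf directive that disables dhcpcd for a given interface is denyinterfaces, so it was most likely:

```
# /etc/dhcpcd.conf
denyinterfaces eth0
```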

I suppose if I wanted additional interfaces to work properly using dhcp, I’d have to get rid of all this and configure each interface manually via NetworkManager or wicd.

GNU xargs is missing the -J option. WHY!?!

I find that using an idiom like
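The idiom itself is missing from this copy of the post; based on the “%” replstr and the mv-into-bar/ example discussed here, it was presumably the BSD form, something like:

```
find . -name '*.txt' -print0 | xargs -0 -J % mv % bar/
```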

is so useful. It replaces the replstr (“%” in this example) with all the arguments at once, or as many as can fit without going over the system’s limit. I couldn’t believe it when I learned that the GNU version of xargs lacks this flag. Yes, it’s only on the BSD xargs as far as I can tell.

Every time I’ve searched, someone suggests using the -I flag on GNU xargs instead, but they are not quite the same. The -I flag substitutes the replstr one argument at a time, so that in the earlier example, instead of executing
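The missing example presumably showed a single batched invocation, along these lines (file names are placeholders):

```
mv file1.txt file2.txt file3.txt bar/
```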

only once, with the -I flag it will instead do
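Again reconstructed with placeholder names: one invocation per argument, like so:

```
mv file1.txt bar/
mv file2.txt bar/
mv file3.txt bar/
```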

I’ve also tried using the -n and -L flags, but they are mutually exclusive with each other and with -I. OK, so we need some kind of klugey workaround.
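The workaround itself is missing from this copy. The description that follows (a “bar/” suffix appended to the input stream, not NUL-terminated, picked up at EOF) suggests a pipeline like this sketch, shown here with scratch files for demonstration:

```shell
# demo setup in a scratch directory (names are arbitrary)
cd "$(mktemp -d)"
mkdir bar
touch 'a file.txt' b.txt

# append "bar/" to the NUL-separated list with no trailing NUL;
# xargs -0 takes it as the final argument when it hits EOF
{ find . -maxdepth 1 -name '*.txt' -print0; printf 'bar/'; } | xargs -0 mv

ls bar/
```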

This adds the “bar/” suffix to the standard input before adding it to the end of the mv command. “But,” you say, “those strings are supposed to be null-terminated!” True, but we’re providing a suffix rather than an extra replacement argument, so the EOF signaled from the input stream is really all we need.

There’s another, more intuitive way, though it’s harder to get right: grab the argument list from a command substitution in a subshell:
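Reconstructed (the exact find expression is a guess):

```
mv $(find . -name '*.txt') bar/
```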

But this suffers from not handling weird file names the right way. Instead one could do:
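Presumably the plain glob version:

```
mv ./*.txt bar/
```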

This actually works better for file names, but lacks the flexibility of find.

Is this stuff really what we ought to do? Just give us the -J, GNU. If you know a different way to deal with this, tweet me @realgeek and I’ll update this post.

Unison dependency hell

I would really like to rid myself of Dropbox, but all the alternatives I’ve tried are too bloated, stuck at beta or alpha quality, too complicated to set up, or just plain don’t do what Dropbox does (minus the sharing stuff, which I don’t care about). I don’t want btsync; it’s closed-source. Seafile is too complicated and makes dubious security claims. Owncloud is a cool project, but their file sync is slow, error-prone, and has other limitations. There are some good services, but they don’t run on all the platforms I need, including Mac OS X, Linux x86 (32- and 64-bit), Linux ARMv6 (my Raspberry Pi B) and Android. I ran Syncthing for a while, but the continuous memory usage is pretty steep for the Pi, and I’ve experienced random silent file truncation in my shared directories with it. So I needed something else.

Allow webapps to make outgoing requests

I was experiencing a pretty bad slowdown while trying to use the admin pages of a WordPress site recently. The load on the machine was quite low, so I began to suspect that it was trying to call out to external services (Facebook, Pinterest, etc.) that might have been blocked by CSF (ConfigServer Firewall).

I started playing around with tcpdump and friends and then realized that the information I was looking for (blocked outgoing requests) was already being logged in /var/log/kern.log on our Ubuntu system (same on Debian).
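For anyone hunting for the same thing: CSF tags dropped packets with an identifiable log prefix, so a grep along these lines pulls out the blocked outgoing connections (the exact prefix may vary by CSF version and configuration):

```
grep 'Firewall: \*TCP_OUT Blocked\*' /var/log/kern.log
```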

Raspberry Pi can do fast video encoding

Yes, the Raspberry Pi can do fast video encoding. Of course you normally wouldn’t want to re-encode any video with an ARM processor, but that’s not what we’re going to do here. We’re going to leverage the GPU. I should point out before proceeding that the input formats for re-encoding are limited with this method; more on that below.

In order to do this, I’m using a proof-of-concept tool called omxtx, which I think is supposed to be a shortened form of “OpenMAX Transcoding”. Off the top of my head, here are the prerequisites for building the binary from source:

  • Raspbian. It will probably work on other RPi distros, but I haven’t tried them.
  • The build-essential package installed, which you normally need to build anything.
  • Memory split of 64MB for video. I previously had this all the way down to 16 since I don’t use a display on my Pi, but bumping it up to only 32MB still caused runtime errors from the omxtx binary. You need to give the GPU some breathing room to encode video.
  • There are probably some libraries you may or may not have installed that the build wants to link in. When I run ldd on my finished binary, it loads all kinds of media libs like libav, libvorbis, libvpx, etc. YMMV.
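For the memory-split prerequisite above, on Raspbian that’s the gpu_mem setting in the boot config:

```
# /boot/config.txt
gpu_mem=64
```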


Clone hard disk with rsync

I recently wanted to move a system over to a faster, larger SSD. I didn’t want to have to re-install an OS, figure out which old files to transfer over, and then re-configure everything. That’s not a fun time in my book.

Here’s what I did (on a live system, yeah!) to clone my disk. Note that this may cause data loss, don’t blame me, keep backups, blah blah…

First, use a partition tool like GNU parted to create a nice big partition on the new drive and mark it as bootable. Leave some space for other partitions or swap space. If you use a separate /boot partition, then I think that needs the bootable flag instead. I’m only using a single root partition and swap. For the purposes of this tutorial, I’ll call my new root partition /dev/sdb1. YMMV.
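The commands themselves didn’t make it into this copy; a sketch of what the filesystem creation and copy presumably looked like (flags are my choice, assuming ext4 and rsync 3.1+):

```
sudo mkfs.ext4 /dev/sdb1
sudo mount /dev/sdb1 /mnt
sudo rsync -aHAXx --info=progress2 / /mnt/
```

The -x keeps rsync on the root filesystem, so pseudo-filesystems like /proc and /sys are skipped automatically.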

Wait a while.
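The UUID can be read with blkid, e.g.:

```
sudo blkid /dev/sdb1
```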

Take note of the UUID listed for /dev/sdb1.
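The edit happens in the fstab on the new drive; the root line ends up looking something like this (the UUID here is just an example):

```
# in /mnt/etc/fstab
UUID=0f7c3e2a-...  /  ext4  errors=remount-ro  0  1
```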

Or use whatever editor you like and put the UUID for /dev/sdb1 in place of the existing UUID for /.
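One step worth calling out: rsync doesn’t copy the boot sector, so if the clone won’t boot you likely need to install GRUB on the new disk as well. A sketch, assuming a BIOS/GRUB setup:

```
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/sdb
sudo chroot /mnt update-grub
```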

Now you should just need to swap out the drives.

Fix for LFD error in syslog

I noticed that I was getting emails from LFD (part of the ConfigServer Firewall package) about failing to find some added check line it was sending to syslog.

The syslog message looks like this:
lfd[%d]: *SYSLOG CHECK* Failed to detect check line [%s] sent to SYSLOG

Of course I’ve replaced the pid with %d and the check string that it’s looking for with %s, since that will vary.

The fix is simple. Just like how you may need to adjust the path in /etc/csf/csf.conf to the real location of the ipset binary, you also may need to set where your SYSLOG messages are going. On an Ubuntu system, that means /var/log/syslog rather than /var/log/messages. Then just run csf -r to restart LFD with the new settings.
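The relevant csf.conf setting, at least in the versions I’ve seen, is SYSLOG_LOG:

```
# /etc/csf/csf.conf
SYSLOG_LOG = "/var/log/syslog"
```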

/var/log/messages appears in more than just csf.conf. Since /var/log/messages doesn’t exist on my system, I’m just going to symlink it to syslog and see what happens.

OK, I thought better of it and just modified csf.syslogs and csf.logfiles. I deleted that messages symlink in /var/log next. LFD was still being a little bitch after I restarted using csf -r, so I ran service lfd stop and then started it again.

Switching from APF to CSF

I was enjoying trying out APF on my Raspberry Pi, but I noticed that it wasn’t blocking repeat attackers the way I wanted it to. fail2ban was working the way it was supposed to work, but it only blocks temporarily, and I never figured out why the gamin back-end to continuously monitor log files didn’t work reliably. I tried to work around that with some extra iptables rules, but was still getting hammered by folks. It made me sad.

ConfigServer Security & Firewall, CSF, has been great so far. Reading through the main config file takes time but that’s good because it’s so well documented. I admit I’m not digging the extensive tuning needed to stop the seemingly endless squawking about IDS-related features (process resources, funny process names, custom cron scripts, etc.) so for now that’s turned off. I may fine-tune it soon.

Other things I like about CSF: optional automatic updates, built-in connection limits and rate limits, the idea of having separate allowed and ignored groups (allowed group may still be banned if not also in the ignored group, which is a nuanced distinction), lots of flexibility & customization, and it also has IPSET support for ultra-fast rule matching!

Fix the broken APF package on Debian/Ubuntu

The Debian / Ubuntu package for Advanced Policy Firewall (APF) seems a bit unmaintained. By default it won’t run without some initial tweaking. Note that they probably want everyone to just download and run the installer from their site nowadays, but that’s not how I roll (usually).

In functions.apf, change the line


That allows the basic functionality of the software to work. Next, for the sake of upgrade-ability, I copy /etc/apf-firewall/conf.apf to a “.my” copy in /etc/apf-firewall/. Then the only change needed to the installed config is to source the .my file. Here’s the bottom of the file:
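The excerpt itself is missing from this copy; assuming the copy was saved as conf.apf.my, the tail of conf.apf would look something like:

```
# last line of /etc/apf-firewall/conf.apf: pull in local overrides
. /etc/apf-firewall/conf.apf.my
```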

Since it won’t work if you try to source the internals.conf file twice, you need to make sure that the last line in the .my file is commented or removed. Now you can edit the other values in the .my file to your liking. Remember to turn off devel mode and change /etc/default/apf-firewall when you’re satisfied with any config changes, then restart the service in the usual way.

Quick Linux ACL

I wanted a directory and everything under it to always get the same owner, group and mode, regardless of who created the files. Access Control Lists to the rescue.

I had to apt-get install acl to get the setfacl command. I’m not exactly clear on why I repeat two regular ACLs with the “d:” prefix to make them default ACLs. Why not just use the default syntax exclusively?
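The actual commands are missing from this copy; here’s a sketch of the kind of thing described, with a hypothetical user, group and path:

```
sudo apt-get install acl
sudo setfacl -R -m u:alice:rwX,g:staff:rwX /srv/shared
sudo setfacl -R -m d:u:alice:rwX,d:g:staff:rwX /srv/shared
```

As far as I can tell, the answer to the question above is that the d: (default) entries only apply to files and directories created later, while the plain entries fix up the directory and anything already in it, so you do need both.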

Source: SuperUser