Setting I2C bus speed on a Raspberry Pi via Device Tree

This tutorial is based on my previous article, in which we installed pure Debian 9 with a recent mainline/vanilla Linux kernel, and so it differs from what you would do on a Raspbian distribution with a Raspbian kernel. In this article, we will set the I2C bus speed on a Raspberry Pi. Here is my previous article:

https://blog.michael.franzl.name/2016/10/31/raspberry-pi-debian-stretch/

Device Trees

The I2C bus on the Broadcom BCM283x chips found on Raspberry Pis is directly and well supported by the mainline/vanilla Linux kernel. Since the Raspberry Pi is a System on a Chip (SoC) rather than a regular PC, its hardware is configured with so-called device trees: low-level descriptions of the chip hardware, compiled from text into a binary format.

The rpi23-gen-image script mentioned in my previous tutorial installs the binary device tree into /boot/firmware/bcm2836-rpi-2-b.dtb. The U-Boot bootloader can read this file and pass it to the Linux kernel which interprets it and enables all the mentioned features in it.

The clock frequency for the I2C bus is configured in this .dtb file, and the default is 100 kHz. There is a tool that lets you inspect the .dtb file and output regular text, and with it you can also make changes to the device configuration. Nowadays, this is the proper way to configure low-level devices on SoCs!

 

Read the device tree

Decompiling the .dtb file back into text outputs the whole device tree. Regarding I2C, you will find i2c@ entries like the one shown below.
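The dtc tool (from the same device-tree-compiler package that also provides fdtget and fdtput) can do the decompilation; a minimal sketch:

    apt-get install device-tree-compiler
    dtc -I dtb -O dts /boot/firmware/bcm2836-rpi-2-b.dtb | less

An i2c@ entry then looks roughly like this (node address and properties are illustrative):

    i2c@7e804000 {
            compatible = "brcm,bcm2835-i2c";
            reg = <0x7e804000 0x1000>;
            clock-frequency = <0x186a0>;    /* 0x186a0 = 100000 Hz */
            status = "okay";
    };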

Change the device tree

The clock-frequency value is what we want to change. The value is a raw binary unsigned 32-bit int stored big-endian, unreadable for humans.

But you can use another tool fdtget to read just this value decoded:
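For example (the node path is an assumption; take it from the dtc output above):

    fdtget /boot/firmware/bcm2836-rpi-2-b.dtb /soc/i2c@7e804000 clock-frequency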

This will output 100000.

In my case, I wanted to set the I2C bus to the slowest frequency, to compensate for long cable lengths. I found that one of the lowest supported I2C clock frequencies is 4kHz. With fdtput you can set the clock-frequency property for each i2c device (there are 3 on the RPi):
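A sketch for all three controllers (the node paths are assumptions; adjust them to what the decompiled tree on your system shows):

    fdtput -t i /boot/firmware/bcm2836-rpi-2-b.dtb /soc/i2c@7e205000 clock-frequency 4000
    fdtput -t i /boot/firmware/bcm2836-rpi-2-b.dtb /soc/i2c@7e804000 clock-frequency 4000
    fdtput -t i /boot/firmware/bcm2836-rpi-2-b.dtb /soc/i2c@7e805000 clock-frequency 4000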

And that’s it. You have to reboot for these settings to take effect.

I used an oscilloscope to verify that the SCL pin of the RPi's GPIO indeed toggled at 4 kHz, and it did!

Raspberry Pi2 and Pi3 running pure Debian and the Linux Mainline/Vanilla Kernel

Update 2017 Feb 25: I have updated the step-by-step instructions based on the suggested fixes and improvements contained in the reader comments. I also have copied the step-by-step instructions from this blog post to the README.md file hosted on https://github.com/michaelfranzl/rpi23-gen-image. From now on I will update the instructions only on github, so expect that the instructions in this blog post will grow slightly out of date.

Update 2017 Mar 4: 64 bit kernel and Debian OS now works on the RPi3.

Update 2018 May: Debian 10 “Buster” works, including wireless LAN on RPi3


Gallium graphics drivers for Raspberry Pi’s VC4 chip are now fully supported by Linux Mainline

This has been a long time coming. In February 2014, Broadcom announced that they would release the formerly closed-source drivers for the VideoCore IV (VC4) GPU of their BCM283x family of Systems-on-a-Chip (SoC), which power the Raspberry Pis.

To make a long story short, Eric Anholt started porting the open-source drivers, as documented by this presentation in early 2015, and contributed code to the Linux (mainline) kernel, libdrm, Mesa, and X.org. I'm sure it was long and painful work. But the results are worth it. A picture says more than 1000 words:

Debian 9 (“Stretch”) running on my Raspberry Pi 2 (and 3), powered by Linux 4.9.0-rc3 Mainline/Vanilla Kernel. Notable in this image: Graphics driver is recognized as “Gallium” running on the VC4 GPU of the Broadcom 2836 system-on-a-chip (SOC). The glxgears benchmark runs with 60 FPS (the vsync of the monitor) and consumes very little CPU. Even the Raspberry I2C interface is recognized by the Linux Mainline Kernel.

To emphasize the point: it is no longer necessary to run specialized distributions (like Raspbian) or Linux kernels (like the Raspbian flavor) in order to have the Raspberry Pi well supported. And this is good news. Debian is a well-established and well-maintained standard distribution. And even though the Raspberry Pi is not powerful enough for the professional desktop user, it is powerful enough for the casual desktop user, and it is very small and cheap, which opens up a whole lot of possibilities for new real-world applications.

I ran an additional test: Gnome even runs on Wayland (modern replacement for the X Window System) on the Raspberry Pi 2 (and 3):

Gnome on Wayland on Raspberry Pi 2


Remarks

It still is not a matter of a one-click installer to reproduce these results, you need some experience when you run into barriers. But it has gotten a whole lot easier. Github user drtyhlpr thankfully published the script rpi23-gen-image that can create a standard Debian distribution for the Raspberry Pi that can simply be copied to a SD card.

I have created a fork of this script to use the official Linux kernel instead of the Raspberry flavor one. Above screenshots are taken from a system that I’ve created with this script. My fork is developed into a slightly different direction:

  • Only official Debian releases 9 (“Stretch”) and newer are supported.
  • Only the official/mainline/vanilla Linux kernel is supported (not the raspberry flavor kernel).
  • The Linux kernel must be pre-cross-compiled on the PC running this script (instructions below).
  • Only U-Boot booting is supported.
  • The U-Boot sources must be pre-downloaded and pre-cross-compiled on the PC running this script (instructions below).
  • An apt caching proxy server must be installed to save bandwidth (instructions below).
  • The installation of the system to an SD card is done by simple copying or rsyncing, rather than creating, shrinking and expanding file system images.
  • The FBTURBO option is removed in favor of the working VC4 OpenGL drivers of the mainline Linux kernel.

All of these simplifications are aimed at higher bootstrapping speed and maintainability of the script. For example, we want to avoid testing of all of the following combinations:

RPi2 with u-boot, with official kernel
RPi2 without u-boot, with official kernel
RPi2 with u-boot, with raspberry kernel
RPi2 without u-boot, with raspberry kernel
RPi3 with u-boot, with official kernel
RPi3 without u-boot, with official kernel
RPi3 with u-boot, with raspberry kernel
RPi3 without u-boot, with raspberry kernel

Thus, the script only supports:

RPi2 with u-boot with official kernel
RPi3 with u-boot with official kernel

RPi2 (setting RPI_MODEL=2) is well supported. It will run the arm architecture of Debian, and a 32-bit kernel. You should get very good results, see my related blog posts:

https://blog.michael.franzl.name/2016/10/31/raspberry-pi-debian-stretch/

https://blog.michael.franzl.name/2016/11/10/setting-i2c-speed-raspberry-pi/

https://blog.michael.franzl.name/2016/11/10/reading-cpu-temperature-raspberry-pi-mainline-linux-kernel/

The newer RPi3 (setting RPI_MODEL=3) is supported too. It will run the arm64 architecture of Debian, and a 64-bit kernel. The support of this board by the Linux kernel will very likely improve over time.

In general, this script is EXPERIMENTAL. I do not provide ISO file system images. It is better to master the process rather than to rely on precompiled images. In this sense, use this project only for educational purposes.

How to do it

Basically, we will debootstrap a minimal Debian 9 (“Stretch”) system for the Raspberry Pi on a regular PC that is also running Debian 9 (“Stretch”). Then we copy that system onto an SD card and boot it on the Raspberry.

We will work with the following directories:

Set up your working directory:

Do the following steps as root user.
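A possible layout (only the ./u-boot path is referenced again below; the other names are assumptions you can change):

    mkdir /rpi-workspace && cd /rpi-workspace
    # ./linux               - Linux kernel sources
    # ./u-boot              - U-Boot bootloader sources
    # ./raspberry-firmware  - proprietary boot firmware blobs
    # ./rpi23-gen-image     - the build script (git clone https://github.com/michaelfranzl/rpi23-gen-image)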

 

Set up caching for apt

This way, you won’t have to re-download hundreds of megabytes of Debian packages from the Debian server every time you run the rpi23-gen-image script.

Install a caching proxy and check its status page:
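apt-cacher-ng, for example, works well for this; its status page lives on port 3142 by default:

    apt-get install apt-cacher-ng
    # then browse to the status page:
    #   http://localhost:3142/acng-report.html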

 

Install dependencies

The following Debian packages must be installed on the build system because they are required for the bootstrapping process. For a RPi2 you additionally need the armhf cross toolchain; for a RPi3, the arm64 cross toolchain.
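A plausible selection (package names from Debian 9; the base list may vary slightly):

    apt-get install debootstrap debian-archive-keyring qemu-user-static \
      binfmt-support dosfstools rsync bmap-tools whois git bc psmisc sudo

    # additionally for a RPi2 (32-bit cross toolchain):
    apt-get install crossbuild-essential-armhf

    # additionally for a RPi3 (64-bit cross toolchain):
    apt-get install crossbuild-essential-arm64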

Kernel compilation

Get the latest Linux mainline kernel. This is a very large download, about 2GB. (For a smaller download of about 90 MB, consider downloading the latest stable kernel as .tar.xz from https://kernel.org.)

Confirmed working revision (approx. version 4.10, Feb 2017): 60e8d3e11645a1b9c4197d9786df3894332c1685
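For example, cloning the mainline tree into ./linux and checking out that revision:

    git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux
    git -C linux checkout 60e8d3e11645a1b9c4197d9786df3894332c1685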

Working configuration files for this Linux kernel revision are included in this repository. (working-rpi2-linux-config.txt and working-rpi3-linux-config.txt).

If you want to generate the default .config file that is also working on the Raspberry, execute

For a RPi2:

For a RPi3:
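A sketch, run inside the ./linux directory and assuming the cross-compiler prefixes installed by crossbuild-essential-armhf/arm64 (the defconfig used originally may have differed):

    # RPi2 (32-bit):
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcm2835_defconfig

    # RPi3 (64-bit):
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig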

Whichever .config file you have at this point, if you want to get more control as to what is enabled in the kernel, you can run the graphical configuration tool at this point:

For a RPi2:

For a RPi3:
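For example (menuconfig additionally needs the ncurses development headers on the build PC):

    # RPi2:
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig

    # RPi3:
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig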

Before compiling the kernel, back up your .config file so that you don’t lose it after the next make mrproper:
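For example (the backup file name is arbitrary):

    cp .config ../config-backup-rpi2.txt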

Compiling the kernel

Clean the sources:

Optionally, copy your previously backed up .config:

Find out how many CPU cores you have to speed up compilation:
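A sketch of these three steps (backup file name as chosen above):

    make mrproper                          # clean the sources
    cp ../config-backup-rpi2.txt .config   # optionally restore the backed-up .config
    nproc                                  # prints the number of CPU cores, e.g. 4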

Run the compilation on all CPU cores. This takes about 10 minutes on a modern PC:

For a RPi2:

For a RPi3:
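A sketch, using all cores reported by nproc:

    # RPi2 (32-bit):
    make -j$(nproc) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage modules dtbs

    # RPi3 (64-bit):
    make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image.gz modules dtbs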

Verify that you have the required kernel image.

For a RPi2 this is:

For a RPi3 this is:
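These are the standard kernel build output locations:

    # RPi2:
    ls -l arch/arm/boot/zImage
    # RPi3:
    ls -l arch/arm64/boot/Image.gz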

 

U-Boot bootloader compilation

Confirmed working revision: b24cf8540a85a9bf97975aadd6a7542f166c78a3
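For example, cloning the upstream U-Boot tree into ./u-boot and checking out that revision:

    git clone https://source.denx.de/u-boot/u-boot.git u-boot
    git -C u-boot checkout b24cf8540a85a9bf97975aadd6a7542f166c78a3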

Let’s increase the maximum kernel image size from the default (8 MB) to 64 MB. This way, u-boot will be able to boot even larger kernels. Edit ./u-boot/include/configs/rpi.h  and add above the very last line (directly above “#endif”):
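The U-Boot macro that controls the maximum uncompressed image size is CONFIG_SYS_BOOTM_LEN, so the added line would be:

    #define CONFIG_SYS_BOOTM_LEN (64 * 1024 * 1024)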

Find out how many CPU cores you have to speed up compilation:

Compile for a RPi model 2 (32 bits):

Compile for a RPi model 3 (64 bits):
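A sketch, run inside the ./u-boot directory and using U-Boot's Raspberry Pi defconfigs:

    # RPi2 (32-bit):
    make -j$(nproc) CROSS_COMPILE=arm-linux-gnueabihf- rpi_2_defconfig
    make -j$(nproc) CROSS_COMPILE=arm-linux-gnueabihf-

    # RPi3 (64-bit):
    make -j$(nproc) CROSS_COMPILE=aarch64-linux-gnu- rpi_3_defconfig
    make -j$(nproc) CROSS_COMPILE=aarch64-linux-gnu-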

Verify that you have the required bootloader file:
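The build produces u-boot.bin in the top-level U-Boot directory:

    ls -l ./u-boot/u-boot.bin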

Pre-download Raspberry firmware

The Raspberry Pi still needs some binary proprietary blobs for booting. Get them:

Confirmed working revision: bf5201e9682bf36370bc31d26b37fd4d84e1cfca
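For example (the target directory name is an assumption; point the image script at wherever you clone it):

    git clone https://github.com/raspberrypi/firmware.git raspberry-firmware
    git -C raspberry-firmware checkout bf5201e9682bf36370bc31d26b37fd4d84e1cfca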

Build the system!

This is where you call the rpi23-gen-image.sh script.

For example:
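A minimal sketch using only variables mentioned in this article; the README and example.sh document the remaining variables, such as the paths to the pre-compiled kernel, U-Boot and firmware:

    RPI_MODEL=2 \
    HOSTNAME=rpi2 \
    USER_NAME=pi \
    PASSWORD=changeme \
    APT_INCLUDES="avahi-daemon,rsync" \
    ./rpi23-gen-image.sh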

You may want to modify the variables according to the section “Command-line parameters” below.

The file example.sh in my github repository contains a working example.

Install the system on an SD card

Insert an SD card into the card reader of your host PC. You'll need two partitions on it. I'll leave the creation of a suitable partition table as an exercise for the reader; the following illustrates the layout reported by fdisk for a 64GB card:
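It amounts to a small FAT partition for the firmware and bootloader, followed by a Linux partition for the root file system (sizes are illustrative):

    /dev/mmcblk0p1    ~256 MB        W95 FAT32   (will be mounted at /boot/firmware)
    /dev/mmcblk0p2    rest of card   Linux       (ext4 root file system)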

The following commands will erase all contents of the SD card and install the system (copy via rsync) on the SD card:
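A sketch; the device names depend on your card reader, and the source directory must be adapted to the output directory created by the script:

    mkfs.vfat /dev/mmcblk0p1
    mkfs.ext4 /dev/mmcblk0p2
    mount /dev/mmcblk0p2 /mnt
    mkdir -p /mnt/boot/firmware
    mount /dev/mmcblk0p1 /mnt/boot/firmware
    rsync -a ./images/rpi2/build/chroot/ /mnt/    # adjust to the script's output directory
    umount /mnt/boot/firmware /mnt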

Note about SD cards: Cheap (or sometimes even professional) SD cards can be weird at times. I’ve repeatedly noticed corrupt/truncated files even after proper rsync and proper umount on different brand new SD cards. TODO: Add a method to verify all file checksums after rsync.

Try booting the Raspberry

Insert the SD card into the Raspberry Pi, and if everything went well, you should see a console-based login prompt on the screen. Login with the login details you’ve passed into the script (USER_NAME and PASSWORD).

Alternatively, if you have included “avahi-daemon” in your APT_INCLUDES, you don’t need a screen and keyboard and can simply log in via SSH from another computer, even without knowing the Raspberry’s dynamic/DHCP IP address (replace “hostname” and “username” with what you have set as HOSTNAME and USER_NAME above):
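With avahi/mDNS the Raspberry announces itself under the .local domain:

    ssh username@hostname.local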

Finishing touches directly on the Raspberry

Remember to change usernames, passwords, and SSH keys!

Check uber-low RAM usage

Running top shows that the freshly booted system uses only 23 MB of the available 1 GB RAM! Confirmed for both RPi2 and RPi3.

Network Time Synchronization

The Raspberry doesn’t have a real time clock. But the default systemd conveniently syncs time from the network. Check the output of timedatectl. Confirmed working for both RPi2 and RPi3.

Hardware Random Number Generator

The working device node is available at /dev/hwrng. Confirmed working for both RPi2 and RPi3.

I2C Bus

Also try I2C support:
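For example, with the i2c-tools package (the bus numbers on your system may differ):

    apt-get install i2c-tools
    i2cdetect -l      # list the available I2C buses
    i2cdetect -y 0    # scan bus 0 for devices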

Confirmed working for both RPi2 and RPi3.

Test onboard LEDs

As of the kernel revision referenced above, this only works on the RPi2. The RPi3 has only the red PWR LED on all the time, but otherwise is working fine.

By default, the green onboard LED of the RPi blinks in a heartbeat pattern according to the system load (this is done by kernel feature LEDS_TRIGGER_HEARTBEAT).

To use the green ACT LED as an indicator for disc access, execute:

To toggle the red PWR LED:

Or use the red PWR LED as heartbeat indicator (kernel option for this must be enabled):
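A sketch using the sysfs LED interface; the LED names ACT and PWR are assumptions for the mainline kernel and can be checked with ls /sys/class/leds:

    echo mmc0 > /sys/class/leds/ACT/trigger        # green ACT LED indicates SD card access
    echo 1 > /sys/class/leds/PWR/brightness        # red PWR LED on (0 = off)
    echo heartbeat > /sys/class/leds/PWR/trigger   # red PWR LED as heartbeat indicator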

Notes about systemd

systemd now replaces decades-old low-level system administration tools. Here is a quick cheat sheet:

Reboot machine:

Halt machine (this actually turns off the RPi):

Show all networking interfaces:

Show status of the Ethernet adapter:

Show status of the local DNS caching client:
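For example, assuming the system uses systemd-networkd and systemd-resolved (interface names are illustrative):

    systemctl reboot                      # reboot machine
    systemctl poweroff                    # halt machine (turns off the RPi)
    networkctl                            # show all networking interfaces
    networkctl status eth0                # status of the Ethernet adapter
    systemctl status systemd-resolved     # status of the local DNS caching client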

Install GUI

Successfully tested on the RPi2 and RPI3.

If you want to install a graphical user interface, I would suggest the light-weight LXDE window manager. Gnome is still too massive to run even on a GPU-accelerated Raspberry.
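For example, LXDE together with the LightDM display manager:

    apt-get install lxde lightdm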

Reboot, and you should be greeted by the LightDM greeter screen!

Test GPU acceleration via VC4 kernel driver

Successfully tested on the RPi2 and RPI3.

Glxinfo should output:
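With the mesa-utils package installed, the renderer line should mention Gallium on VC4, roughly:

    glxinfo | grep -i renderer
    # OpenGL renderer string: Gallium 0.4 on VC4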

Kernel compilation directly on the Raspberry

Only successfully tested on the RPi2. Not yet tested on the RPi3.

In case you want to compile and deploy another Mainline Linux kernel directly on the Raspberry, proceed as described above, but you don’t need the ARCH and CROSS_COMPILE flags. Instead, you need the -fno-pic compiler flag for modules. The following is just the compilation step (configuration and installation omitted):
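A sketch of the native compilation step on the Pi itself; CFLAGS_MODULE is one way to pass the extra flag to module builds:

    make -j4 CFLAGS_MODULE=-fno-pic zImage modules dtbs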

Follow-up articles

Setting I2C bus speed on a Raspberry Pi via Device Tree

Reading Raspberry Pi chip temperature with mainline Linux kernel

Zero Client: Boot kernel and root filesystem from network with a Raspberry Pi2 or Pi3


How to convert images to PDF with paper and image size, without ImageMagick

ImageMagick’s convert tool is handy for converting a series of images into a PDF. Just for future reference, here is one method to achieve the same without convert. It is useful if you have 1-bit PBM images (e.g. scanned text) at hand:
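For example, for a 6×9 inch book page with a 5.5 inch wide scan (all sizes in inches; adjust to your book):

    cat *.pbm | pnmtops -width 6 -height 9 -imagewidth 5.5 | ps2pdf - book.pdf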

This command concatenates all .pbm files, pipes the data to pnmtops to create an intermediary PostScript file on-the-fly, and pipes this PostScript file to ps2pdf to create the final pdf.

With the -width and -height parameter you specify the paper size of the generated PDF in inches, which should correspond to the paper size of the original book. With the -imagewidth parameter you specify the width that your scan/photo takes up on the page.

Digitize books: Searchable OCR PDF with text overlay from scanned or photographed books on Linux

Here is my method to digitize books. It is a tutorial about how to produce searchable, OCR (Optical Character Recognition) PDFs from a hardcopy book using free software tools on Linux distributions. You probably can find more convenient proprietary software, but that’s not the objective of this post.

Important: I should not need to mention that depending on the copyright attached to a particular work, you may not be allowed to digitize it. Please inform yourself before, so that you don’t break copyright laws!!!

Digitize books

To scan a book, you basically have 2 choices:

  1. Scan each double page with a flatbed scanner
  2. Take a good photo camera, mount it on a tripod, have it point vertically down on the book, and then take photos of each double page. Professional digitizers use this method due to less strain on the originals.

No matter which method, the accuracy of OCR increases with the resolution and contrast of the images. The resolution should be high enough so that each letter is at least 25 pixels tall.

Since taking a photo is almost instant, you can be much faster with the photographing method than with a flatbed scanner. This is especially true for voluminous books which are hard to repeatedly take on and off a scanner. However, getting sharp high-resolution images with a camera is more difficult than with a flatbed scanner. So it's a tradeoff that depends on your situation, equipment and skills.

Using a flatbed scanner doesn’t need explanation, so I’ll only explain the photographic method next.

Photographing each page

If you use a camera, and you don’t have some kind of remote trigger or interval-trigger at hand, you would need 2 people: someone who operates the camera, and another one who flips the pages. You can easily scan 1 double page every 2 seconds once you get more skilled in the process.

Here are the steps:

  • Set the camera on a tripod and have it point vertically down. The distance between camera and book should be at least 1 meter to approximate orthogonal projection (imitating a flatbed scanner). Too much perspective projection would skew the text lines.
  • Place the book directly under the camera – avoid pointing the camera at any non-90-degree angles that would cause perspective skewing of the contents. Later we will unskew the images, but the less skewing you get at this point, the better.
  • Set up uniform lighting, as bright as you are able. Optimize lighting directions to minimize possible shadows (especially in the book fold). Don’t place the lights near the camera or it will cause reflections on paper or ink.
  • Set the camera to manual mode. Use JPG format. Turn the camera flash off. All pictures need to have uniform exposure characteristics to make later digital processing easier.
  • Maximize zoom so that a margin of about 1 cm around the book is still visible. This way, aligning of the book will take less time. The margin will be cropped later.
  • Once zoom and camera position is finalized, mark the position of the book on the table with tape. After moving the book, place it back onto the original position with help of these marks.
  • Take test pictures. Inspect and optimize the results by finding a balance between the following camera parameters:
    • Minimize aperture size (high f/value) to get sharper images.
    • Maximize the ISO value to minimize exposure time, so that wiggling of the camera has less of an effect. Bright lighting helps lower the ISO, which reduces noise.
    • Maximize resolution so that the letter size in the photos is at least 25 pixels tall. This will be important to increase the quality of the OCR step below, and you’ll need a good camera for this.
  • Take one picture of each double page.
One double page of a book that will be digitized. This is actually a scan, but you can also use a good photo camera. Make sure that letters are at least 25 pixels tall. Note that the right page is slightly rotated.

Image Preprocessing

Let’s remember our goal: We want a PDF …

  • which is searchable (the text should be selectable)
  • whose file size is minimized
  • has the same paper size as the original
  • is clearly legible

The following steps are the preprocessing steps to accomplish this. We will use ImageMagick command line tools (available for all platforms) and a couple of other software, all available for Linux distributions.

A note on image formats

Your input files can be JPG or TIFF, or whatever format your scanner or camera support. However, this format must also be supported by ImageMagick. We’ll convert these images to black-and-white PBM images to save space and speed up further processing. PBM is a very simple, uncompressed image format that only stores 1 bit per pixel (2 colors). This image format can be embedded into the PDF directly, and it will be losslessly compressed extremely well, resulting in the smallest possible PDF size.

Find processing parameters by using just a single image

Before we will process all the images as a batch, we’ll just pick one image and find the right processing parameters. Copy one photograph into a new empty folder and do the following steps.

Converting to black and white

Suppose we have chosen one image in.JPG. Run:
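A sketch using the parameter values mentioned below (brightness 0, contrast 10, threshold 50%):

    convert in.JPG -brightness-contrast 0x10 -threshold 50% 1blackwhite.pbm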

Inspect the generated 1blackwhite.pbm file. Optimize the parameters threshold (50% in the above example), brightness (0 in the above example), and contrast (10 in the above example) for best legibility of the text.

Black-white conversion of the original image. Contrast and resolution are important.

Cropping away the margins

Next we will crop away the black borders so that the image will correspond to the paper size.
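For example, matching the numbers described below:

    convert 1blackwhite.pbm -crop 2400x2000+760+250 +repage 2cropped.pbm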

In this example, the cropped image will be a rectangle of 2400×2000 pixels, taken from the offset 760,250 of the input image. Inspect 2cropped.pbm until you get the parameters right, it will be some trial-and-error. The vertical book fold should be very close to the horizontal middle of the cropped image (important for next step).

The cropped image

Split double pages into single pages
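A 50% horizontal cut with ImageMagick looks like this (-scene 1 makes the output numbering start at 1):

    convert 2cropped.pbm -crop 50%x100% +repage -scene 1 split%04d.pbm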

This will generate 2 images. Inspect split0001.pbm and split0002.pbm. You can only cut at exactly 50% horizontally; otherwise you'll get more than 2 images.

Left split page
Right split page

Deskewing the image

Your text lines are probably not exactly horizontal (page angles, camera angles, perspective distortion, etc.). However, having exactly horizontal text lines is very important for accuracy of OCR software. We can deskew an image with the following command:
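For example, with ImageMagick's deskew operator and its commonly used 40% threshold:

    convert split0001.pbm -deskew 40% 3deskewed.pbm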

Inspect the output file 3deskewed.pbm for best results.

The deskewed left page
The deskewed right page. Notice that the text lines are now perfectly horizontal. However, deskewing can have its limits, so the original image should already be good!

Process all the images

Now that you’ve found the parameters that work for you, it’s simple to convert all of your images as a batch by passing all the parameters at the same time to convert. Run the following in the folder where you stored all the JPG images (not in the folder where you did the previous single-image tests):
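A sketch that chains the steps above for every .JPG in the folder (substitute the parameter values you found):

    for f in *.JPG; do
      convert "$f" -brightness-contrast 0x10 -threshold 50% \
        -crop 2400x2000+760+250 +repage \
        -crop 50%x100% +repage -deskew 40% \
        -scene 1 "${f%.JPG}-%d.pbm"
    done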

Now, for each .JPG input file, we’ll have two  .pbm output files. Inspect all .pbm files and make manual corrections if needed.

Note: If you have black borders on the pages, consider using unpaper to remove them. I’ll save writing about using unpaper for a later time.

 

Producing OCR PDFs with text overlay

The tesseract OCR engine can generate PDFs with a selectable text layer directly from our PBM images. Since OCR is CPU intensive, we’ll make use of parallel processing on all of our CPU cores with the parallel tool. You can install both by running
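On Debian:

    apt-get install tesseract-ocr parallel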

 

For each PBM file, create one PDF file:
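For example ({.} is the input file name without its extension; add -l with your language code if the book is not in English):

    parallel --progress 'tesseract {} {.} pdf' ::: *.pbm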

To merge all the PDF files into one, run pdfunite from the poppler-utils package:
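For example (the shell expands the glob in alphabetical, i.e. page, order):

    pdfunite *.pdf complete-book.pdf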

Success! And this is our result:

The PDF has been OCR’d and a selectable text layer has been generated

 

100% HTTPS in the internet? Non-Profit makes it possible!

HTTPS on 100% of websites in the internet? This just got a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) non-profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, they don’t require cumbersome registration, paperwork, set-up and payment. The certificates are fetched in an automated way through an API (the ACME protocol, Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

Dedicated to transparency, generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot. It is a regular command-line program written in Python and the source code is available on Github.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example certs for 3 domains are fetched in one command with the -d switch):
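For example, with the webroot method (the webroot path and domains are placeholders):

    ./certbot-auto certonly --webroot -w /var/www/example \
      -d example.com -d www.example.com -d blah.example.com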

With the -w  flag you tell the script where to put temporary static files (a sub-folder .well-known  will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is identical to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.

Eventually, the (already chained, which is nice!) certificate and private key are copied into /etc/letsencrypt/live/example.com/ :
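Namely:

    /etc/letsencrypt/live/example.com/fullchain.pem
    /etc/letsencrypt/live/example.com/privkey.pem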

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates that you have loaded to your machine is as easy as …
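For example, with certbot-auto:

    ./certbot-auto renew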

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard domain certificates. For those, you still have to pay money to some other CA that supports them. But in the code example shown above, you would generate only one certificate covering the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. But except for SaaS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that for commercial certificate authorities, one has to pay 1000, 100 or even 30 bucks per certificate per year. Where’s the work invested into such a certificate that is worth so much? The generation of a certificate is automated, and is done in a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!

Icon made by Freepik from www.flaticon.com 

Resume rsync transfers with the --partial switch

Recently I wanted to rsync a 16GB file to a remote server. The ETA was calculated as 23 hours and, as it usually happens, the file transfer aborted after about 20 hours due to a changing dynamic IP address of the modem. I thought: “No problem, I just re-run the rsync command and the file transfer will resume where it left off.” Wrong! Because by default, on the remote side, rsync creates a temporary file beginning with a dot, and once rsync is aborted during the file transfer, this temporary file is deleted without a trace! Which, in my case, meant that I wasted 20 hours of bandwidth.

It turns out that one can tell rsync to keep temporary files by passing the argument --partial . I tested it, and the temporary file indeed is kept around even after the file transfer aborts prematurely. Then, when simply re-running the same rsync command, the file transfer is resumed.

In my opinion, rsync should adopt this behavior by default. Simple thing to fix, but definitely an argument that should be passed every time!

Added: Simply use -P! It implies --partial, and you’ll also get a nice progress output for free!

Unprivileged Unix Users vs. Untrusted Unix Users. How to harden your server security by confining shell users into a minimal jail

As a server administrator, I recently discovered a severe oversight of mine, one that was so big that I didn’t consciously see it for years.

What can Unprivileged Unix Users do on your server?

Any so-called “unprivileged Unix user” who has SSH access to a server (be it simply for the purpose of rsync’ing files) is not really “unprivileged” as the word may suggest. Due to the world-readable permissions of many system directories, set by default in many Linux distributions, such a user can read a large percentage of the directories and files existing on the server, many of which can and should be considered secret. For example, on my Debian system, the default permissions are:

/etc: world-readable including most configuration files, amongst them passwd which contains plain-text names of other users
/boot: world-readable including all files
/home: world-readable including all subdirectories
/mnt: world-readable
/src: world-readable
/srv: world-readable
/var: world-readable
etc.

Many questions are asked about how to lock a particular user into their home directory. User “zwets” on askubuntu.com explained that this is besides the point and even “silly”:

A user … is trusted: he has had to authenticate and runs without elevated privileges. Therefore file permissions suffice to keep him from changing files he does not own, and from reading things he must not see. World-readability is the default though, for good reason: users actually need most of the the stuff that’s on the file system. To keep users out of a directory, explicitly make it inaccessible. […]

Users need access to commands and applications. These are in directories like /usr/bin, so unless you copy all commands they need from there to their home directories, users will need access to /bin and /usr/bin. But that’s only the start. Applications need libraries from /usr/lib and /lib, which in turn need access to system resources, which are in /dev, and to configuration files in /etc. This was just the read-only part. They’ll also want /tmp and often /var to write into. So, if you want to constrain a user within his home directory, you are going to have to copy a lot into it. In fact, pretty much an entire base file system — which you already have, located at /.

I agree with this assessment. Traditionally, on shared machines, users needed to have at least read access to many things. Thus, once you give someone even just “unprivileged” shell access, this user — not only including his technical knowledge but also the security of the setup of his own machine which might be subject to exploits — is still explicitly and ultimately trusted to see and handle confidentially all world-readable information.

The problem: Sometimes, being “unprivileged” is not enough. The expansion towards “untrusted” users.

As a server administrator you sometimes have to give shell access to some user or machine who is not ultimately trusted. This happens in the very common case where you transfer files via rsync  (backups, anyone?) to/from machines that do not belong to you (e.g. a regular backup service for clients), or even those machines which do belong to you but which are not physically guarded 24/7 against untrusted access. rsync  however requires shell access, period. And if that shell access is granted, rsync  can read out all world-readable information, and for that matter, even when you have put into place rbash  (“restricted bash”) or rssh  (“restricted ssh”) as a login shell. So now, in our scenario, we are facing a situation where someone ultimately not trusted can rsync all world-readable information from the server to anywhere he wants to simply by doing:
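For example (host name and target path are placeholders):

    rsync -r untrusted@yourserver.example.com:/etc /tmp/grabbed-etc/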

One may suggest to simply harden the file permissions for those untrusted users, and I agree that this is a good practice in any case. But is it practical? Hardening the file permissions of dozens of configuration files in /etc  alone is not an easy task and is likely to break things. For one obvious example: Do I know, without investing a lot of research (including trial-and-error), which consequences chmod o-rwx /etc/passwd  will have? Which programs for which users will it break? Or worse, will I even be able to reboot the system?

And what if you have a lot of trusted users working on your server, all having created many personal files, all relying on the world-readable nature as a way to share those files, and all assuming that world-readable does not literally mean ‘World readable’? Grasping the full extent of the user collaboration and migrating towards group-readable instead of world-readable file permissions likely will be a lot of work, and again, may break things.

In my opinion, for existing server machines, this kind of work is too expensive to be justified by the benefits.

So, no matter from which angle you look at this problem, having ultimately non-trusted users on the system is a problem that can only be satisfactorily solved by jailing them into some kind of chroot directory, and allowing only those tasks that are absolutely necessary for them (in our scenario, rsync only). Notwithstanding that, and to repeat, users who are not jailed must be considered ultimately trusted.

The solution: Low-level system utilities and a minimal jail

For the above reasons regarding untrusted users, ‘hardening’ shell access via rbash or even rssh is just a cosmetic measure that still doesn’t prevent world-readable files from being literally readable by the World (you have to assume that untrusted users will share data liberally). rssh has a built-in feature for chroot’ing, but it was originally written for RedHat, the documentation about it is vague, and it wouldn’t even accept a chroot environment created by debootstrap.

Luckily, there is a low-level solution, directly built into the Linux kernel and core packages. We will utilize the ability of PAM to ‘jailroot’ an SSH session on a per-user basis, and we will manually create a very minimal chroot jail for this purpose. We will jail two untrusted system users called “jailer” and “inmate” and re-use the same jail for both. Each user will be able to rsync files, but will neither be able to escape the jail nor see the files of the other.

The following diagram shows the directory structure of the jail that we will create:
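It looks roughly like this (the library paths depend on your architecture and on what ldd reports below):

    /home/minjail/
    ├── bin/ls
    ├── usr/bin/rsync
    ├── lib/...            (shared libraries needed by ls and rsync)
    └── home/
        ├── jailer/
        └── inmate/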

The following commands are based on Debian and have been tested in Debian Wheezy.

First, create user accounts for two ultimately untrusted users, called “jailer” and “inmate” (note that the users are members of the host operating system, not the jail):
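For example:

    adduser --disabled-password --gecos "" jailer
    adduser --disabled-password --gecos "" inmate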

Their home directories will be /home/jailer  and /home/inmate  respectively. They need home directories so that you can set up SSH keys (via ~/.ssh/authorized_keys ) for passwordless-login later.

Second, install the PAM module that allows chroot’ing an authenticated session:
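On Debian the module is packaged as libpam-chroot:

    apt-get install libpam-chroot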

The installed configuration file is /etc/security/chroot.conf . Into this configuration file, add
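The format is one "user chroot-directory" pair per line:

    jailer /home/minjail
    inmate /home/minjail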

These two lines mean that after completing the SSH authentication, the users jailer and inmate will be jailed into the directory /home/minjail of the host system. Note that both users will share the same jail.

Third, we have to enable the “chroot” PAM module for SSH sessions. For this, edit /etc/pam.d/sshd  and add to the end
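The added line is:

    session required pam_chroot.so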

After saving the file, the changes are immediately valid for the next initiated SSH session — thankfully, there is no need to restart any service.

Making a minimal jail for chroot

All that is missing now is a minimal jail, to be made in /home/minjail. We will do this manually, but it would be easy to make a script that does it for you. In our jail, and for our described scenario, we only need to provide rsync to the untrusted users. And just for experimentation, we will also add the ls command. However, the policy is: only add into the jail what is absolutely necessary. The more binaries and libraries you make available, the higher the likelihood that bugs may be exploited to break out of the jail. We do the following as the root user:
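A sketch of the skeleton (the lib directories must mirror where ldd finds the libraries on your system):

    mkdir -p /home/minjail/bin /home/minjail/usr/bin /home/minjail/lib /home/minjail/home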

Next, create the home directories for both users:
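For example:

    mkdir -p /home/minjail/home/jailer /home/minjail/home/inmate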

Next, for each binary you want to make available, repeat the following steps (we do it here for rsync , but repeat the steps for ls ):

1. Find out where a particular program lives:

2. Copy this program into the same location in the jail:

3. Find out which libraries are used by that binary

4. Copy these libraries into the corresponding locations inside of the jail (linux-gate.so.1 is a virtual file in the kernel and doesn’t have to be copied):
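A sketch of the four steps for rsync on a 64-bit Debian system; your paths and library versions will differ, so follow the ldd output exactly:

    # 1. find the program
    which rsync                          # e.g. /usr/bin/rsync
    # 2. copy it to the same location inside the jail
    cp /usr/bin/rsync /home/minjail/usr/bin/
    # 3. list the libraries it needs
    ldd /usr/bin/rsync
    # 4. copy each listed library to the corresponding path inside the jail, e.g.:
    mkdir -p /home/minjail/lib/x86_64-linux-gnu /home/minjail/lib64
    cp /lib/x86_64-linux-gnu/libc.so.6 /home/minjail/lib/x86_64-linux-gnu/
    cp /lib64/ld-linux-x86-64.so.2 /home/minjail/lib64/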

After these 4 steps have been repeated for each program, finish the minimal jail with proper permissions and ownership:
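Matching the permissions described below:

    chown -R root:root /home/minjail
    chmod 751 /home/minjail/home
    chown jailer:jailer /home/minjail/home/jailer
    chown inmate:inmate /home/minjail/home/inmate
    chmod 750 /home/minjail/home/jailer /home/minjail/home/inmate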

The permission 751 of the ./home directory (drwxr-x--x root root) will allow any user to enter the directory, but forbids seeing which subdirectories it contains (information about other users is considered private). The permission 750 of the user directories (drwxr-x---) makes sure that only the corresponding user will be able to enter.

We are all done!

Test the jail from another machine

As stated above, our scenario is to allow untrusted users to rsync  files (e.g. as a backup solution). Let’s try it, in both directions!

Testing file upload via rsync

Both users “jailer” and “inmate” can rsync a file into their respective home directory inside the jail. See:
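For example (the server name is a placeholder):

    rsync testfile.txt jailer@yourserver.example.com:/home/jailer/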

To allow password-less transfers, set up a public key in /home/jailer/.ssh/authorized_keys  of the host operating system.

Testing file download via rsync

This is the real test. We will attempt to download as much as possible with rsync (we will try to get the root directory recursively):
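For example:

    rsync -r jailer@yourserver.example.com:/ /tmp/jail-root/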

Here you see that all world-readable files were transferred (the programs ls and rsync and their libraries), but nothing from the home directory inside of the jail.

However, rsync succeeds in grabbing the user's own home directory. This is expected and desired behavior:
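For example:

    rsync -r jailer@yourserver.example.com:/home/jailer/ /tmp/jailer-home/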

 

Testing shell access

We have seen that we cannot do damage or reveal sensitive information with rsync . But as stated above, rsync  cannot be had without shell access. So now, we’ll log in to a bash shell and see which damage we can do:
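For example:

    ssh jailer@yourserver.example.com "/bin/bash -i"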

Put "/bin/bash -i" as an argument to use the host system's bash in interactive mode; otherwise you would have to set up special device nodes for the terminal inside of the jail, which makes it more vulnerable to exploits.

We are now dumped to a primitive shell:

At this point, you can explore the jail. Try to do some damage (Careful! Make sure you’re not in your live host system, prefer an experimental virtual machine instead!!!) or try to read other user’s files. However, you will likely not succeed, since everything you have available is Bash’s builtin commands plus rsync  and ls , all chroot’ed by a system call to the host’s kernel.

If any reader of this article should discover exploits of this method, please leave a comment.

Conclusion

I have argued that the term “unprivileged user” on a Unix-like operating system can be misunderstood, and that the term “untrusted user” should be introduced in certain use cases for clarity. I have presented an outline of an inexpensive method to accommodate untrusted users on a shared machine for various purposes with the help of the low-level Linux kernel system call chroot() through a PAM module called pam_chroot.so, as well as a minimal, manually created jail. This method is still experimental and has not been entirely vetted by security specialists.


Obscure error: dd: failed to open: No medium found

I got this error message when I wanted to use dd to copy a raw image to a MicroSD card. You will see this message if you have unmounted the card with a file manager like Nautilus. I guess it does not unmount the card fully, or unmounts it in a special way.

The fix is easy:

Unmount the card via the terminal:
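For example (the device name is illustrative; check it with lsblk first):

    umount /dev/mmcblk0p1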

The dd  command will work now.

Another variant of this error can be seen when using bmaptool.

Exim and Spamassassin: Rewriting headers, adding SPAM and Score to Subject

This tutorial is a follow-up to my article Setting up Exim4 Mail Transfer Agent with Anti-Spam, Greylisting and Anti-Malware.

I finally got around solving this problem: If an email has a certain spam score, above a certain threshold, Exim should rewrite the Subject header to contain the string  *** SPAM (x.x points) *** {original subject}

Spamassassin has a configuration option to rewrite a subject header in its configuration file /etc/spamassassin/local.cf  …
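That option is rewrite_header, for example:

    rewrite_header Subject *** SPAM (_SCORE_ points) ***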

… but this is misleading, because it is used only when Spamassassin is run stand-alone. If it is used in combination with an MTA (Mail Transfer Agent) like Exim, the MTA is ultimately responsible for modifying emails. So the solution lies in the proper configuration of Exim. To modify an already accepted message, the Exim documentation suggests a System Filter. You can set it up like this:

Enable the system filter in your main Exim configuration file. Add to it:
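The relevant main-configuration option is system_filter, pointing at the filter file created below:

    system_filter = /etc/exim4/system.filter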

Then create the file  /etc/exim4/system.filter , set proper ownership and permission, then insert:
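A sketch matching the description below; the header name and score format must match what your acl_check_data section adds, so test carefully before relying on it:

    # /etc/exim4/system.filter
    if ("$h_X-Spam_score_int:" matches "^[0-9]") and
       ("$h_X-Spam_score_int:" is above 50)
    then
      headers add "Old-Subject: $h_subject:"
      headers remove subject
      headers add "Subject: *** SPAM ($h_X-Spam_score: points) *** $h_old-subject:"
      headers remove old-subject
    endif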

This means: If the header  $header_X-Spam_score_int  is present (has been added by Exim in the acl_check_data  ACL section, see my previous tutorial), and is more than 50 (this is 5.0), rewrite the Subject header. The regular expression checks if the spam score is valid and not negative.

Note that in the acl_check_data section of the Exim config, you can deny a message above a certain spam score threshold. This means, in combination with this System Filter, you can do the following:

  • If spam score is above 10, reject/bounce email from the ACL.
  • If spam score is above 5, rewrite the Subject.

Teamviewer alternative: How to get a Remote Desktop VNC connection via SSH over an intermediate server, avoiding firewalls

What to do when Teamviewer suddenly doesn’t connect or you can’t or don’t want to use it for other reasons? What if a friend needs urgent assistance and you need to see his screen to help out? Standard Open Source tools to the rescue! If you know how to use SSH from the command line and have access to an user account on a remote server running SSHD, then this article will save you!

Teamviewer-like products are so popular because they solve a real problem: connecting from one household computer to another is not straightforward, because we all use regular modems/routers to connect to the internet, and they are usually not set up for port forwarding. In layman's terms this means: you can initiate a connection to the internet, but the internet can never initiate a connection to you. This quandary has only one solution: an intermediary is needed. Teamviewer-like products all offer such intermediary servers. Data is never sent directly from one computer to the other; it is passed through a server (which, by the way, allows for all kinds of spying if the encryption is not good enough!).

There are tons of (unfortunately for me) half-working instructions on the web about what the commands must look like. It took me several hours to wrap my mind around the problem. For this article, I've decided to make an image gallery so that you can see more easily what is going on behind the scenes. Running just 4 commands will do the trick!

The following list and image shows the starting point of our situation:

  • We are sitting in front of the computer called “sittinghere”.
  • We want to see the screen of the computer called “overthere”.
  • There is a server called “hopper” which runs an SSH daemon (sshd) listening on the standard port 22. Our data will ‘hop’ through there, hence the name “hopper”.

 

The initial situation: regular firewalls constrict “sittinghere” and “overthere”. “hopper” has a firewall with only port 22 open.

We instruct our friend, sitting in front of the computer “overthere”, to start any kind of VNC server. There are many products for Windows, Mac OS, and Linux available. I’m using Linux and decided on x11vnc because it allows access to the screen of the currently logged-in user instead of starting a new user session. The command is:
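For example, sharing the current display with a simple password (x11vnc listens on port 5900 by default):

    x11vnc -display :0 -passwd PASSWORD -forever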

After this command, our situation has changed. See the following image. The VNC server has started listening on port 5900 (see lower right corner):

 

A VNC server is started “overthere”. It listens on port 5900.

Now we have to instruct our friend, sitting at “overthere”, to run an ssh command to connect to “hopper”. You can make a little batch script for this to make it more convenient, but that is not the focus of this article. The command is:
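For example (replace "user" with the account name on "hopper"):

    ssh -N -R 7001:localhost:5900 user@hopper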

With this command, sshd on “hopper” is instructed to create a socket on port 7001. And when someone connects to this port, it will be as if the same connection had been made on “overthere” port 5900. In the code above, “localhost” refers to the computer “overthere” because the command is executed on this computer.

Let’s see how our situation now looks like:

 

An ssh client is started “overthere” with remote port forwarding options. A connection is made to sshd, running on “hopper”. sshd opens a port on localhost:7001 and listens for incoming connections from localhost.

Now that “overthere” has made its connection to “hopper”, it is time for “sittinghere” to connect to “hopper” too. Here is the command:
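For example:

    ssh -N -L 7002:localhost:7001 user@hopper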

This means: A socket on port 7002 will be created on localhost (“sittinghere”). If someone connects to this port, it will be as if the same connection had been made on port 7001 on “hopper”. Here’s the new situation:

 
“sittinghere” starts ssh with local port forwarding options. It connects to sshd running on port 22 of “hopper”. It also creates port 7002 on localhost and listens for incoming connections.

Now, remember that a connection to port 7001 on “hopper” will have the same effect as if the same connection had been made on “overthere”. This means we will have a kind of ripple-through effect: As soon as we connect to port 7002 on “sittinghere”, the connection will ripple through until it connects to port 5900 of “overthere”. And this endpoint is our VNC server.

Now let’s connect with a VNC client from “sittinghere” to “overthere”. You will find lots of good VNC clients on the internet. Just point your VNC viewer at localhost, port 7002 (written as localhost::7002 in many viewers), which is equivalent to 127.0.0.1, port 7002.

This will start our connection cascade, and the final situation looks like this:

 

On “sittinghere”, a VNC viewer is started and told to connect to localhost port 7002. This connection is detected by ssh, which makes sshd running on “hopper” connect to port 7001, which causes ssh running on “overthere” to connect to port 5900. Now the route is complete and we can see the remote screen!

 

Congratulations, we’re now all connected through! If you have set a password (we’ve set it to “PASSWORD” in the above example), you now will see the remote screen. Note that we didn’t even have to execute a command on “hopper”, nor even reconfigure any firewall! My experience is that the quality and speed of the transmitted remote image is much better with this method than with commercial products. Of course, this is because commercial products have to divide and share bandwidth across all users.

The SSH connections will stay alive until you terminate them. So, just type “exit” in the ssh terminals on “sittinghere” and “overthere” that we opened previously (to “hopper”).

I’m glad I don’t have to rely on commercial products only any more!