How to compile ezstream from source

Debian Stretch’s version of ezstream is currently a bit out of date. Here is how you compile ezstream from source to get the latest improvements and bugfixes. Not even the INSTALL file in the ezstream repo has all the steps:

apt-get install libshout3-dev libxml2-dev libtag1-dev libvorbis-dev libogg-dev check libtag-extras-dev libtagc0-dev

git clone https://github.com/xiph/ezstream.git

cd ezstream

libtoolize --force
aclocal
autoheader
automake --force-missing --add-missing
autoconf
autoreconf -f

./configure
make
make install

Note that the configuration file structure has changed from what can be found in older blog posts on the internet. For example, to pipe Ogg Vorbis data into ezstream without re-encoding, you can use something like teststream.xml:

<ezstream>
  <server>
    <hostname>media.example.com</hostname>
    <password>hackme</password>
  </server>
  
  <stream>
    <mountpoint>test.ogg</mountpoint>
    <format>Vorbis</format>
  </stream>
  
  <media>
    <type>stdin</type>
    <filename>stdin</filename>
    <stream_once>1</stream_once>
  </media>
</ezstream>

Then, to stream 30 seconds of brown noise with a sine sweep to an Icecast server for testing purposes:

sox --null -p synth 00:00:30 brownnoise synth 00:00:30 sine 300-3000 | \
sox -r 48k -t raw -e signed -b 16 -c 1 -V1 - -r 48000 -t ogg - | \
ezstream -vvc teststream.xml


Hardening WordPress against hacking attempts

The WordPress Codex states:

Security in WordPress is taken very seriously

This may be the case, but in reality you have to take some additional measures yourself, or you will be left with a false sense of security.

With the default settings of WordPress and PHP, the minute you host WordPress and give access to one non-security-conscious administrative user, your entire hosting environment should be considered compromised.

The general problem with WordPress and PHP is that rather than thinking about which few essential features to turn on (whitelisting), you have to think about dozens of insecure features to turn off (blacklisting).

This excellent article (“Common WordPress Malware Infections”) gives you an overview of what you’re up against when it comes to protecting WordPress from malware.

Below are a couple of measures you should take, starting with the most important ones.

Disable WordPress File Editing

WordPress comes with the PHP file editor enabled by default. One of the most important rules of server security is that you never, ever, allow users to execute arbitrary program code. This is just inviting disaster. All it takes is for the admin password to be stolen/sniffed/guessed, and the WordPress PHP code can be injected with PHP malware. Then, if you haven’t taken other restrictive measures in php.ini (see section below), PHP may now

  • Read all readable files on your entire server
    • Include /etc/passwd and expose the names of all user accounts publicly
    • Read database passwords from wp-config.php of all other WordPress installations and modify or even delete database records
    • Read source code of other web applications
    • etc.
  • Modify writable files
    • Inject more malware
    • etc.
  • Use PHP’s curl functions to make remote requests
    • Turns your server into part of a botnet

So, amongst the first things to do when hosting WordPress is to disable file editing capabilities in wp-config.php:

define('DISALLOW_FILE_EDIT', true);

But that measure assumes that WordPress and third-party plugins are themselves secure enough to enforce their own restrictions, which one cannot assume, so it is better to…

The “Big Stick”: Remove Write File Permissions

I’ll posit here something that I believe to be self-evident:

It is safer to make WordPress files read-only and thus disallow frequent WordPress (and third-party plugin) upgrades than it is to allow WordPress (and third-party plugins) to self-modify.

Until I learn that this postulate is incorrect, I’ll propose that you make all WordPress files (with the obvious exception of the uploads directory) owned by root, writable only by root, and interpreted by a non-root user. This will leverage the security inherent in the Linux kernel:

# Directories: enterable and listable by all, writable only by root
find . -type d -exec chmod 755 {} \;
# Files: readable by all, writable only by root
find . -type f -exec chmod 644 {} \;
# Everything is owned by root...
chown -R root:root .
# ...except uploads, the only directory the web server user may write to
chown -R www-data:www-data wp-content/uploads

Note that you still can upgrade WordPress from time to time manually. You could even write a shell script for it.
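
A minimal sketch of such an upgrade script, assuming the installation lives in /var/www/example (adapt the path and URL to your setup):

#!/bin/sh
# Sketch: manual WordPress upgrade for a read-only installation.
set -e
WP_DIR=/var/www/example
cd /tmp
wget https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz
# Copy the new version over the old one, never touching uploads or the config.
rsync -a --exclude wp-content/uploads --exclude wp-config.php wordpress/ "$WP_DIR"/
# Re-apply the restrictive ownership and permissions from above.
cd "$WP_DIR"
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chown -R root:root .
chown -R www-data:www-data wp-content/uploads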

Restrict serving of files

Disable direct access to wp-config.php, which contains very sensitive information that would be revealed should PHP ever fail to process the file. In Nginx:

location = /wp-config.php {
    deny all;
}

Disable PHP execution in the uploads directory. In Nginx:

location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}

Restrict PHP

I’ll refer the reader to excellent articles already written elsewhere; please do implement the suggestions therein:

Hardening PHP from PHP.ini

25 PHP Security Best Practices
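
To give a flavor of what those articles recommend, a hardened php.ini typically contains directives along these lines (example values, not a drop-in configuration):

allow_url_fopen = Off
allow_url_include = Off
expose_php = Off
display_errors = Off
disable_functions = exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec
open_basedir = /var/www/example:/tmp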

Host WordPress in a Virtualization Environment

In addition to all of the above, any kind of publicly exposed web application (not just WordPress) should really be hosted in an isolated environment. Docker seems promising for this purpose. I found the following great external tutorials about generating a LAMP Docker image:

WordPress Developer’s Intro To Docker

WordPress Developer’s Intro To Docker, Part Two


no.php – Transparent reverse proxy written in PHP that allows you to not have to write PHP any more

This little project will probably be my only contribution to the world of PHP.

The code is at https://github.com/michaelfranzl/no.php

This short, single-file, 80-line PHP script is a simple and fully transparent HTTP(S) reverse proxy written in PHP. It allows you to never have to use PHP again for a new project, if you feel so inclined (for example, if you are forced to host on a fully 3rd-party-managed server where you can’t do more than run PHP and upload files via FTP). The script simply reads all requests from a browser pointed to it, forwards them (via PHP’s curl library) to a web application listening at another URL (e.g. on a more powerful, more secure, more private, or more capable server in a different data center), and returns the responses transparently and unmodified.
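
Condensed to its core, the idea looks roughly like this (a simplified sketch, not the actual no.php source; it omits query strings and request header forwarding):

<?php
// Sketch of a transparent reverse proxy in PHP.
$backend_url = "https://myapp.backend.com:3000";

// Everything after .../no.php, e.g. "/images/image.png"
$path = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '/';

$ch = curl_init($backend_url . $path);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $_SERVER['REQUEST_METHOD']);
curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents('php://input'));

// Replay the backend's response headers to the browser.
curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $line) {
    $h = trim($line);
    if ($h !== '' && stripos($h, 'HTTP/') !== 0) header($h);
    return strlen($line);
});

$body = curl_exec($ch);
http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
curl_close($ch);
echo $body;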

Supports:

  • Regular and XMLHttpRequests (AJAX)
  • All HTTP headers without discrimination
  • GET and POST verbs
  • Content types (HTTP payload) without discrimination
  • Redirects (internal redirects are rewritten to relative URIs)

Does not support (or not tested):

  • HTTP verbs other than GET and POST (but these are usually emulated anyway)
  • HTTP greater than version 1.1 (e.g. reusable connections)
  • Upgrade to websocket (persistent connections)
  • Multipart content type

Usage illustrated by the standard example

You have a non-PHP web application (called the “backend”) listening on https://myapp.backend.com:3000, but due to constraints you must make it available on a shared hosting server called https://example.com/subdir which only supports PHP and can’t be configured at all. On the latter server, Apache (or Nginx, it doesn’t matter) will usually do the following:

  1. If a URI points to a .php file, this file will be interpreted
  2. If a URI points to a file that does not exist, a 404 status will be returned.

Using no.php, to accommodate the second case, all URIs of the proxied web app (including static files) must be appended to the URI https://example.com/subdir/no.php. For example:

https://example.com/subdir/no.php/images/image.png
https://example.com/subdir/no.php/people/15/edit

If your backend app supports that extra /subdir/no.php prefix to all paths, you are all set and ready to use no.php. Then:

  1. Simply copy no.php into the subdir directory of example.com
  2. Change $backend_url in no.php to "https://myapp.backend.com:3000"
  3. Point a browser to https://example.com/subdir/no.php

Project status

Experimental. Use only if you know what you are doing.

How to convert images to PDF with paper and image size, without ImageMagick

ImageMagick’s convert tool is handy for converting a series of images into a PDF. Just for future reference, here is one method to achieve the same without convert. It is useful if you have 1-bit PBM images (e.g. scanned text) at hand:

cat *.pbm | pnmtops -setpage -width 6 -height 8 -imagewidth 3 - | ps2pdf -dEPSFitPage - > book.pdf

This command concatenates all .pbm files, pipes the data to pnmtops to create an intermediary PostScript file on the fly, and pipes this PostScript file to ps2pdf to create the final PDF.

With the -width and -height parameters you specify the paper size of the generated PDF in inches, which should correspond to the paper size of the original book. With the -imagewidth parameter you specify the width that your scan/photo takes up on the page.
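
If your scans are JPEGs rather than 1-bit PBMs, the same pipeline works after an on-the-fly conversion. This sketch assumes the Netpbm tools jpegtopnm, ppmtopgm and pamditherbw are installed:

# Convert each JPEG to a dithered 1-bit PBM, then feed the stream to pnmtops as before
for f in *.jpg; do jpegtopnm "$f" | ppmtopgm | pamditherbw; done | \
pnmtops -setpage -width 6 -height 8 -imagewidth 3 - | \
ps2pdf -dEPSFitPage - > book.pdf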

100% HTTPS in the internet? Non-Profit makes it possible!

HTTPS on 100% of websites on the internet? This just got a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) non-profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, they don’t require cumbersome registration, paperwork, set-up and payment. The certificates are fetched in an automated way through an API (the ACME protocol, Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

In the name of transparency, all generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot. It is a regular command-line program written in Python, and the source code is available on GitHub.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example certs for 3 domains are fetched in one command with the -d switch):

certbot-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com -d blah.example.com

With the -w flag you tell the script where to put temporary static files (a sub-folder .well-known will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is identical to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.
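
For the Nginx case, serving that challenge directory from the webroot can be as simple as:

location /.well-known/acme-challenge/ {
    root /var/www/example;
}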

Eventually, the (already chained, which is nice!) certificate and private key are copied into /etc/letsencrypt/live/example.com/:

fullchain.pem
privkey.pem

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.
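
For example, in Nginx:

server {
    listen 443 ssl;
    server_name example.com www.example.com blah.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/example;
}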

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates on your machine is as easy as …

./certbot-auto renew

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!
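
A matching /etc/crontab entry could look like this (the times are arbitrary, and the path to certbot-auto is wherever you downloaded it; certbot only renews certificates that are close to expiry):

23 4,16 * * * root /path/to/certbot-auto renew --quiet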

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard domain certificates. For these, you still have to pay money to some other CAs which support them. But in the code example shown above, you would generate only one certificate covering the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. But except for SaaS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that for commercial certificate authorities, one has to pay 1000, 100 or even 30 bucks per certificate per year. Where’s the work invested into such a certificate that is worth so much? The generation of a certificate is automated, and is done in a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!


OpenGL programming in Python: pyglpainter

This was a recent hobby programming project of mine for use in a CNC application, using Python and OpenGL. The source code is available at https://github.com/michaelfranzl/pyglpainter.

This Python module provides the class PainterWidget, extending PyQt5’s QGLWidget class with boilerplate code necessary for applications which want to build a classical orthogonal 3D world in which the user can interactively navigate with the mouse via the classical (and expected) Pan-Zoom-Rotate paradigm, implemented via a virtual trackball (using quaternions for rotations).

This class is especially useful for technical visualizations in 3D space. It provides a simple Python API to draw raw OpenGL primitives (LINES, LINE_STRIP, TRIANGLES, etc.) as well as a number of useful composite primitives rendered by this class itself (Grid, Star, CoordSystem, Text, etc.; see the files in classes/items). As a bonus, all objects/items can either be drawn as real 3D world entities which optionally support “billboard” mode (fully camera-aligned or arbitrary-axis aligned), or as a 2D overlay.

It uses the “modern”, shader-based, OpenGL API rather than the deprecated “fixed pipeline” and was developed for Python version 3 and Qt version 5.

Model, View and Projection matrices are calculated on the CPU, and then utilized in the GPU.

Qt has been chosen not only because it provides the GL environment but also vector, matrix and quaternion math. A port of this Python code into native Qt C++ is therefore trivial.

Look at example.py, part of this project, to see how this class can be used. If you need more functionality, consider subclassing.

Most of the time, calls to item_create() are enough to build a 3D world with interesting objects in it (the name for these objects here is “items”). Items can be rendered with different shaders.

This project was originally created for a CNC application, but then extracted from this application and made multi-purpose. The author believes it contains the simplest and shortest code to quickly utilize the basic and raw powers of OpenGL. To keep the code simple and short, the project was optimized for technical, line- and triangle-based primitives, not the realism that game engines strive for. The simple shaders included in this project will draw aliased lines, and the output will therefore look more like computer graphics of the 80’s. But “modern” OpenGL offloads many things into shaders anyway.

This class can either be used for teaching purposes, experimentation, or as a visualization backend for production-class applications.

Mouse Navigation

Left Button drag left/right/up/down: Rotate camera left/right/up/down

Middle Button drag left/right/up/down: Move camera left/right/up/down

Wheel rotate up/down: Move camera ahead/back

Right Button drag up/down: Move camera ahead/back (same as wheel)

The FOV (Field of View) is held constant. “Zooming” is rather moving the camera forward along its look axis, which is more natural than changing the FOV of the camera. Even cameras in movies and TV series nowadays very, very rarely zoom.


Exim and Spamassassin: Rewriting headers, adding SPAM and Score to Subject

This tutorial is a follow-up to my article Setting up Exim4 Mail Transfer Agent with Anti-Spam, Greylisting and Anti-Malware.

I finally got around to solving this problem: if an email has a spam score above a certain threshold, Exim should rewrite the Subject header to contain the string *** SPAM (x.x points) *** {original subject}.

Spamassassin has a configuration option to rewrite a subject header in its configuration file /etc/spamassassin/local.cf …

rewrite_header Subject ***SPAM***

… but this is misleading, because it is used only when Spamassassin is used stand-alone. If used in combination with a MTA (Mail Transfer Agent) like Exim, the MTA is ultimately responsible for modifying emails. So, the solution lies in the proper configuration of Exim. To modify an already accepted message, the Exim documentation suggests a System Filter. You can set it up like this:

Enable the system filter in your main Exim configuration file. Add to it:

system_filter = /etc/exim4/system.filter
system_filter_user = Debian-exim

Then create the file /etc/exim4/system.filter, set proper ownership and permissions, then insert:

if $header_X-Spam-Score matches "^[^-0][0-9\.]+" and ${sg{$header_X-Spam-Score:}{\\.}{}} is above 50
then
  headers add "Old-Subject: $h_subject"
  headers remove "Subject"
  headers add "Subject: *** SPAM ($header_X-Spam-Score points) *** $h_old-subject"
  headers remove "Old-Subject"
endif

This means: if the X-Spam-Score header is present (it has been added by Exim in the acl_check_data ACL section, see my previous tutorial), is valid and not negative, and its value with the decimal point removed is above 50 (i.e. a score above 5.0), then the Subject header is rewritten. The regular expression checks that the spam score is valid and not negative.

Note that in the acl_check_data section of the Exim config, you can deny a message above a certain spam score threshold. This means, in combination with this System Filter, you can do the following:

  • If spam score is above 10, reject/bounce email from the ACL (sketched below).
  • If spam score is above 5, rewrite the Subject.
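
For the first point, a rejection rule in the acl_check_data section would look roughly like this (a sketch only; Exim’s $spam_score_int variable holds the spam score multiplied by ten, so 100 means 10.0):

deny message   = This message scored $spam_score spam points.
     spam      = nobody:true
     condition = ${if >{$spam_score_int}{100}}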

XeLaTeX: Unicode font fallback for unsupported characters

Traditionally, I used LaTeX to typeset documents, and it works perfectly when you have a single script (e.g. only English or German). But as soon as you want to typeset Unicode text in multiple languages, you’re quickly out of luck. LaTeX is just not made for Unicode, and you need a lot of helper packages, documentation reading, and complicated configuration in your document to get it all right.

All I wanted was to typeset the following Unicode text. It contains regular Latin characters, Chinese characters, modern Greek, and polytonic (ancient) Greek.

Latin text. Chinese text: 紫薇北斗星  Modern greek: Διαμ πριμα εσθ ατ, κυο πχιλωσοπηια Ancient greek: Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος. And regular latin text.

I thought it was a simple task. I thought: let’s just use XeLaTeX, which has out-of-the-box Unicode support. In the end, it was a simple task, but only after struggling to solve a particular problem. To show you the problem, I ran the following straightforward code through XeLaTeX…

\documentclass[]{book}

\usepackage{fontspec}

\begin{document}
Latin text. Chinese text: 紫薇北斗星 Modern greek: Διαμ πριμα εσθ ατ, κυο πχιλωσοπηια Ancient greek: Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος. And regular latin text.
\end{document}

… and the following PDF was produced:

XeLaTeX rendering the Computer Modern font with unsupported Unicode characters

It turns out that the missing Unicode characters are not XeLaTeX’s fault. The problem is that the font used (XeLaTeX by default uses a slightly more encompassing Computer Modern font) does not implement all Unicode characters. Implementing all Unicode characters in a single font (about 1.1 million possible code points) is a monumental task, and there is only a small handful of fonts whose maintainers aim for full coverage (one of them is GNU FreeFont, which is already part of the Debian distribution, and therefore available to XeLaTeX).

So, I thought, let’s just use a font which is dedicated to Unicode. I selected in my document the pretty Junicode font:

\setmainfont{Junicode}

The result was:

XeLaTeX and Junicode font with Chinese and Greek characters

Now, Greek worked, but still no Chinese characters. It turns out that even fonts dedicated to Unicode do not yet have all possible characters implemented, because it is a lot of work to produce high-quality fonts with matching styles for millions of possible characters.

So, how do regular web browsers or office applications do it? They use a mechanism called font fallback: when a particular character is not implemented in the chosen main font, another font which does implement it is silently used. XeLaTeX can do the same with a package called ucharclasses, which gives you full control over the fallback font selection process. The ucharclasses documentation gives an example using the \fontspec font selection. I decided to use the font IPAexMincho, which supports Chinese characters. So I added to my document:

\usepackage[CJK]{ucharclasses}
\setTransitionsForCJK{\fontspec{IPAexMincho}}{\fontspec{Junicode}}

… but when running XeLaTeX with this addition, ucharclasses somehow entered an endless loop with high CPU usage on the TeX Live 2014 distribution (part of Debian). It hung at the line:

(./ucharclass.aux) (/usr/share/texmf/tex/latex/tipa/t3cmr.fd)

Endless googling didn’t bring up any useful hints. Something must have changed in the internals, and the ucharclasses documentation needs updating. In any event, it took me 4 hours to find a fix. The solution was to use a font selection mechanism other than \fontspec{}, which no longer seems to be compatible with ucharclasses. Instead, I used fontspec’s suggested \newfontfamily mechanism. Here is the final working code:

\documentclass[]{book}

\usepackage{fontspec}
\setmainfont{Junicode}
\newfontfamily\myregularfont{Junicode}
\newfontfamily\mychinesefont{IPAexMincho}

\usepackage[CJK]{ucharclasses}
\setTransitionsForCJK{\mychinesefont}{\myregularfont}

\begin{document}
Latin text. Chinese text: 紫薇北斗星  Modern greek: Διαμ πριμα εσθ ατ, κυο πχιλωσοπηια Ancient greek: Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος. And regular latin text.
\end{document}

Here is the result: mixed Latin, Chinese, and Greek scripts with two different fonts, Junicode and IPAexMincho:

XeLaTeX with Unicode font fallbacks

Pretty!


How to set up audio streaming (internet radio) in Linux

This tutorial will show you how you can go live with your own internet radio station in a few minutes.

Demystifying “streams”

There is a lot of information, disinformation and irrelevant information about this on the internet. When you listen to internet radio and inspect the network requests in your Google Chrome Developer Tools (yes, you should use Chrome anyway), you will discover that a ‘magickal’ stream is nothing more than a blatantly simple HTTP download of a regular file that never finishes. Yup, jawdroppingly simple.
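
You can convince yourself of this with nothing but curl (the URL here is a placeholder for any Icecast mount point); it will download audio data until you interrupt it:

curl -v http://icecast.example.com:8000/mystream.ogg > stream_dump.ogg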

What do you need?

In order to broadcast audio (e.g. internet radio) into the internet, you need

  1. a remote streaming server with high bandwidth to which many clients can connect
  2. a local stream generator, which is sending a single stream to the streaming server

The following tutorial shows how you can easily achieve this with free and open source tools which are part of the Debian (Ubuntu) distributions. It will take you 15 minutes to start your first rudimentary broadcast.

We will use Icecast2 as a streaming server, simply for the reason that it is part of the Debian distribution and that I got it to work immediately. As the local stream generator we will use darkice, for the same reasons.

Why not Windows? Well, since the majority of remote servers are running Linux distributions, you can use Icecast2 anyway. If you want to use a different stream generator for Windows, you can do so. This screencast shows you how it’s done.

Icecast2

Is Icecast a professional-grade solution? According to a blog,

Very much so. ICEcast is an industry standard platform used by thousands and thousands of radio stations all over the world. Its wide compatibility means people can listen with most players and operating systems.

Listeners will be able to connect to your MP3 stream from all over the world, with all the popular media players including Windows Media Player, iTunes, Winamp, Realplayer, XMMS, and many more media players besides.

Although incredibly simple, it can cope with even the heaviest demands and will not break under pressure. Its simplicity works to the broadcaster and listeners favor.

According to Wikipedia,

Version 2 [of Icecast] was started in 2001, a ground-up rewrite aimed at multi-format support (initially targeting Ogg Vorbis) and scalability.

A ground-up rewrite for scalability certainly sounds like good news! So, let’s dive in!

You would do the following steps on a server which is located at a large internet node with enough bandwidth to serve all your audience. To install, simply type

apt-get install icecast2

During the installation you will be asked if you want to configure Icecast2. Answer yes. You will be asked for the hostname; simply leave the default “localhost”. Next, you will be asked for source, relay and administration passwords. For testing, leave “hackme”. If you want to change the configuration at a later point, edit the configuration file /etc/icecast2/icecast.xml.

Next, you have to enable the Icecast2 server by setting ENABLE in the configuration file /etc/default/icecast2 to true.
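
That is, the file should contain the line:

ENABLE=true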

Now, start the server by typing

service icecast2 start

You now can access the web admin interface on port 8000 of your machine:

Icecast2 web-based admin interface

The log files are /var/log/icecast2/error.log and access.log. It is best to tail -f both files to observe what is going on.

Darkice

Darkice is a stream generator. It encodes audio into various formats (e.g. Ogg Vorbis, MP3, etc.) from various inputs (e.g. microphone jack, line-in jack, or the stereo mix of your operating system) and sends a single stream to our Icecast2 server, which in turn re-broadcasts it to all connected clients.

To install, simply type:

apt-get install darkice

By default, it does not install a configuration file, but there is an example in the documentation. Copy it to the /etc directory:

cp /usr/share/doc/darkice/examples/darkice.cfg /etc

You will need to edit this file according to your needs. Here is an example that worked for me:

# this section describes general aspects of the live streaming session
[general]
duration = 0
bufferSecs = 5
reconnect = yes

[input]
device = default
sampleRate = 44100
bitsPerSample = 16
channel = 2

[icecast2-0]
bitrateMode = abr
format = vorbis
bitrate = 96
server = 192.168.0.250
port = 8000
password = hackme
mountPoint = example1.ogg
name = DarkIce trial
description = This is only a trial
url = http://www.yourserver.com
genre = my own
public = yes
localDumpFile = dump.ogg

Make sure that the password and the IP address of the Icecast2 server (which we installed earlier on the other machine) match. Also, remember the mountPoint of this stream. This is simply a label; in my case it is example1.ogg. Then, as a normal user, you simply run

darkice

It is a console-only application and you will see some messages. This is what I get:

DarkIce 1.0 live audio streamer, http://code.google.com/p/darkice/
Copyright (c) 2000-2007, Tyrell Hungary, http://tyrell.hu/
Copyright (c) 2008-2010, Akos Maroy and Rafael Diniz
This is free software, and you are welcome to redistribute it 
under the terms of The GNU General Public License version 3 or
any later version.

Using config file: /etc/darkice.cfg
Using ALSA DSP input device: default
Could not set POSIX real-time scheduling, this may cause recording skips.
Try to run darkice as the super-user.

The note about real-time scheduling is just a warning; it works for me nevertheless. It would be easy to run darkice as the superuser instead.

Making a simple stream player

We will simply make a small website with one <audio> element. That is enough to play streams. Create an empty file called streamtest.html with the following contents:

<html>
  <body>
    <audio controls>
      <source src="http://192.168.0.250:8000/example1.ogg" />
    </audio>
  </body>
</html>

Make sure that the IP address corresponds to the machine the Icecast2 server is running on. Open this HTML file in a browser and click the play button. You should now hear the same audio that the darkice client receives as its input.

Changing the audio input for darkice

In case you don’t have the Pulse Audio Volume control installed, install it with

apt-get install pavucontrol

Then run it. As soon as you have darkice running, the “Recording” tab will show the text “ALSA plug-in [darkice]: ALSA Capture from …”. From the drop-down you can select the input source. The text is a bit misleading: in my case, “Monitor” means the stereo mix of the entire computer (e.g. all system sounds, all played-back audio), while “Built-in Analog Stereo” means the microphone / line-in jack.

Pulse Audio volume control pavucontrol

For professional radio applications you of course would not use such a simple software mixer, but have an external hardware-based mixer to which all the microphones and the line-out of your computer are attached. Then you would connect the final output of the hardware mixer to your computer line-in and select “Built-in Analog Stereo” for darkice’s input.

Linux has a more professional audio system called JACK, a replacement for the standard system Pulse Audio (we were using Pulse Audio in the above tutorial; it is similar to what Windows uses). Both run on the Linux kernel’s sound system, ALSA.

Conclusion

Despite the tons of documentation and blog posts on the internet, it is surprisingly easy to set up your own simple internet radio station with zero investment, all thanks to the Open Source movement.


Citations within footnotes in LaTeX

Writing a tutorial on programming, I needed citations within footnotes. Luckily, the biblatex package added support for citations within footnotes in 2011 (see the first comment on the SourceForge page of biblatex). Apparently, this is not straightforward, since a low-level citation command has to be used to satisfy LaTeX. Anyway, this is now done automatically by the biblatex package, so I didn’t have to make any changes.

In my document, I only use the LaTeX command \autocite, whose behavior I can define in the document preamble. In my preamble I have:

\usepackage[backend=biber,autocite=footnote,style=authortitle-ibid]{biblatex}
\bibliography{bibliography.en.bib}

Now, with the following TeX code…

\chapter{My chapter}

This is a test\footnote{TeX is awesome \autocite[see][p. 120]{Knuth}. I agree.} for a citation within a footnote. However, this is a citation\autocite[]{Ritchie75} in normal text.

… you get the following output:

LaTeX citation within footnote on HTML and Kindle output


You will notice that footnote 2 is a citation from within the normal text. Since I’ve specified autocite=footnote in the preamble, it is rendered as a footnote. However, the citation within footnote 1 is rendered inline.