Debian 10: System monitoring using e-mail (Exim as a smarthost)

Recently, IT infrastructure monitoring tools have been springing up like mushrooms after a rain. But let’s take a step back and look at a traditional and very basic way to monitor a system – using e-mail. Yes, you heard right; that internet app invented in the 1960s!

For some events and incidents on a Debian system, the superuser is still, by default, informed via e-mail. For example, mdadm sends an e-mail to root when a software RAID array degrades. But by default, these e-mails are just dumped to a mail spool file and never inspected. That is really bad when your hard disk just died and you were never told that the other disk in the RAID array had already failed a couple of months earlier!

It has been 8 years since I last wrote about how to configure my favorite Mail Transport Agent, Exim. Back then, as an e-mail service provider, I wanted to run Exim as a production MTA: receiving, sending, and relaying e-mail on the internet. It was a daunting task, to say the least.

This time, however, I just want to run Exim locally, behind a firewall. It should accept locally submitted e-mail and, rather than dumping it to a local mail spool file, forward it to a remote e-mail address where I can read it on my phone. This mode of running an MTA is called a smarthost setup.

A smarthost is a local e-mail MTA/server which accepts SMTP messages for any e-mail address locally and forwards them, as an authenticated client, to the next MTA (that is why it must be “smart” – it must have the right credentials). The submission must be authenticated with a username and password, because your ISP-assigned IP address is quite likely on some e-mail blacklist, and internet-facing MTAs will refuse to accept e-mail from such addresses.

As always, accomplishing this was not entirely straightforward. But it is totally doable, and in a short time, too. This blog post documents the steps for Debian 10; they should continue to work in later Debian versions.

You will need:

  • A working e-mail account at an e-mail provider of your choice (SMTP server FQDN, port number, username and password). It is advisable to generate a dedicated username and password; do not use your primary ones! If the credentials leak or are stolen (they will be stored on the local hard drive, readable only by root), you can then simply disable the affected e-mail account without further impact. Furthermore, for security, we are going to require that the SMTP server offers a port featuring TLS-on-connect, as suggested by RFC 8314 (“cleartext considered obsolete”); a quick way to verify this is shown right after this list.
  • Debian 10 (newer should work too, because the fundamentals rarely change)
  • Internet connection
  • 30 minutes of time
  • root access. Do all the following steps as root.
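
To quickly verify that your provider’s submission port really offers TLS-on-connect (first requirement above), you can probe it with openssl; if the handshake succeeds and a certificate chain is printed, the port speaks implicit TLS (mail.example.com:465 is a placeholder for your provider’s server and port):

openssl s_client -connect mail.example.com:465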

First, make sure that the FQDN of your e-mail provider’s SMTP server is not a DNS alias (CNAME), otherwise domain matching inside Exim will not work (Exim often uses reverse-DNS lookups to find the actual FQDN). Check this by entering:

host smtp.example.com

If this prints "smtp.example.com" is an alias for "mail.example.com", then resolve all aliases and choose the final FQDN. Another way to get the final FQDN is by doing a reverse DNS lookup on the server’s IP address, e.g. by running:

dig -x <ip address>

Use the printed FQDN to the right of “PTR”. In our example, this is mail.example.com.
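
If you prefer a single command, here is a small sketch chaining both lookups (it assumes the last line of the forward lookup is an IP address, which dig then resolves back to the PTR name):

dig -x $(dig +short smtp.example.com | tail -n1) +short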

Now, let’s actually start by installing Exim (the exim4 metapackage is enough; it pulls in exim4-daemon-light):

apt install exim4

Next, configure exim4:

dpkg-reconfigure exim4-config

At the prompts, do the following:

  1. Select the option “mail sent by smarthost; no local mail”
  2. For “System mail name”, leave the pre-filled hostname or FQDN.
  3. “IP-addresses to listen on”: leave the default “127.0.0.1;::1”.
  4. “Other destinations for which mail is accepted”: leave the pre-filled hostname or FQDN.
  5. “Visible domain name for local users”: leave the pre-filled hostname or FQDN.
  6. For “IP address or host name of the outgoing smarthost” enter the final FQDN which you have found previously, plus a double colon ::, plus the port number. For example: mail.example.com::465
  7. “Keep number of DNS-queries minimal”: leave the default “No”.
  8. “Split configuration into small files”: leave the default “No”.

Next, generate certificates for Exim by running the command below; these will be used for the TLS connections. For testing, you can accept the defaults for all of the questions the command asks. Later, you could upgrade to more professional certificates:

/usr/share/doc/exim4-base/examples/exim-gencert

Next, define the recipient e-mail address as an alias of the system user root. Add the following line to /etc/aliases:

root: recipient@example.com

The remote SMTP server may reject mail without a proper “From:” address. Usually, the server expects the address in the “From:” field to have the same domain name as the MTA itself. For this reason, add the following line to /etc/email-addresses:

root: sender@example.com

Next, add credentials for the SMTP submission to the file /etc/exim4/passwd.client, in the format <FQDN>:<username>:<password>. For example:

mail.example.com:username:hackme
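
Since this file contains a plaintext password, make sure it is readable only by root and the Debian-exim group (these are the ownership and mode the file ships with; re-apply them if you created the file from scratch):

chown root:Debian-exim /etc/exim4/passwd.client
chmod 640 /etc/exim4/passwd.client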

The default configuration of Exim shipped in the Debian package is excellent and very flexible. But TLS is not enabled by default, and so we still need to make a few adaptations to the default config file.

Add the following lines to /etc/exim4/exim4.conf.localmacros. Create the file if it doesn’t exist:

# Enable TLS
MAIN_TLS_ENABLE = 1

# Require TLS for all remote hosts (STARTTLS or TLS-on-connect)
REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS = *

# Require TLS-on-connect for RFC-8314
REMOTE_SMTP_SMARTHOST_REQUIRE_PROTOCOL = smtps

Next, make the following adaptation to the file /etc/exim4/exim4.conf.template. After the block that begins with .ifdef REMOTE_SMTP_SMARTHOST_HOSTS_REQUIRE_TLS and ends with the matching .endif, add the following:

.ifdef REMOTE_SMTP_SMARTHOST_REQUIRE_PROTOCOL
  protocol = REMOTE_SMTP_SMARTHOST_REQUIRE_PROTOCOL
.endif

Finally, regenerate the Exim configuration and restart the daemon:

update-exim4.conf
systemctl restart exim4

The final configuration file is written to /var/lib/exim4/config.autogenerated.
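
Before sending any test e-mail, you can ask Exim how it would now route mail addressed to root; the output should show the address being redirected to recipient@example.com and handled by the smarthost router/transport:

exim4 -bt root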

Testing with an Exim test instance

To test your setup, you can start a test instance of Exim in a local root console, listening on port 26. It will use the same configuration file, but it runs in parallel with, and independently of, the already running Exim daemon. Run as root:

exim -bd -d -oX 26

Then, you can use swaks (from the swaks Debian package) to send a test e-mail to the test instance. The output will be very verbose, so you can easily debug:

swaks --from root@localhost --to root@localhost --port 26

This will send an e-mail to the address looked up under the key “root” in the file /etc/aliases (in our case, recipient@example.com). The “From:” header address will be looked up under the key “root” in the file /etc/email-addresses (in our case, sender@example.com).

See if you got the e-mail in the target inbox. If yes, then testing with the production Exim daemon should work too.

Testing with the Exim daemon

Observe the output of …

tail -f /var/log/exim4/mainlog

… and then again send a test e-mail using swaks, but this time to the default port 25:

swaks --from root@localhost --to root@localhost

If you got the e-mail, then congratulations! You will now receive e-mails directed at the local root user. You can easily extend this for other, unprivileged users of the system.
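
For example, to have e-mail for a hypothetical unprivileged user alice forwarded as well, add another line to /etc/aliases:

alice: recipient@example.com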

To test the entire chain and make sure that you will always be informed of important events, you could send a test e-mail at periodic intervals. But this is a topic for a future blog post!

Scripting

You could wrap the above swaks command in a shell script to send e-mail from other scripts. For example:

#!/bin/sh
# Send a notification e-mail via the local Exim setup.
# $1: subject (prefixed with the hostname), $2: body
swaks --from root@localhost --to root@localhost --header "Subject: [$(hostname)] $1" --body "$2"
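
Assuming you saved the script as /usr/local/bin/notify (a hypothetical path) and made it executable, another script could then send a notification like this:

/usr/local/bin/notify "RAID degraded" "md0 has lost a disk"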

Warning: You could also use the older tools sendmail or mail to write a similar script; both call exim4 directly under the hood. That is fine as long as you don’t call such a script from a systemd unit (e.g. a service or a timer). When Exim is invoked from the command line to submit e-mail, it forks and detaches a short-lived process that delivers the e-mail to the target MTA. But systemd kills all sub-processes as soon as the unit’s main process exits, so the forked process lives just long enough to put the e-mail into Exim’s queue; the actual delivery never runs (until the next periodic queue run, if any). swaks, on the other hand, hands the mail to Exim via SMTP, so the delivery is performed by the long-running daemon and is not subject to being killed by systemd.

You should also rate-limit Exim to protect against DoS attacks, but this is also a topic for a future blog post!

Hardening WordPress against hacking attempts

Note: This post is 6 years old. Some information may no longer be correct or even relevant. Please, keep this in mind while reading.

The WordPress Codex states:

Security in WordPress is taken very seriously

This may be the case, but in reality, you yourself have to take some additional measures so that you won’t have a false sense of security.

With the default settings of WordPress and PHP, the minute you host WordPress and give access to one single non-security-conscious administrative user, your entire hosting environment should be considered compromised.

The general problem with WordPress and PHP is that rather than thinking about which few essential features to turn on (whitelisting), you have to think about dozens of insecure features to turn off (blacklisting).

This excellent article (“Common WordPress Malware Infections”) gives you an overview of what you’re up against when it comes to protecting WordPress from malware.

Below are a couple of suggested measures, starting with the most important ones.

Disable WordPress File Editing

WordPress comes with the PHP file editor enabled by default. One of the most important rules of server security is that you never, ever allow users to execute arbitrary program code. This is just inviting disaster. All it takes is for the admin password to be stolen/sniffed/guessed to allow the WordPress PHP code to be injected with PHP malware. Then, if you haven’t taken other restricting measures in PHP.ini (see section below), PHP may now

  • Read all readable files on your entire server
    • Read /etc/passwd and expose the names of all user accounts publicly
    • Read database passwords from wp-config.php of all other WordPress installations and modify or even delete database records
    • Read source code of other web applications
    • etc.
  • Modify writable files
    • Inject more malware
    • etc.
  • Use PHP’s curl functions to make remote requests
    • Turns your server into part of a botnet

So, among the first things to do when hosting WordPress is to disable file editing capabilities by adding the following line to wp-config.php:

define('DISALLOW_FILE_EDIT', true);

But that measure relies on WordPress (plus third-party plugins) to enforce its own restrictions, which one cannot take for granted, so it is better to…

The “Big Stick”: Remove Write File Permissions

I’ll posit here something that I believe to be self-evident:

It is safer to make WordPress files read-only and thus disallow frequent WordPress (and third-party plugin) upgrades than it is to allow WordPress (and third-party plugins) to self-modify.

Until I learn that this postulate is incorrect, I’ll propose that you make all WordPress files (with the obvious exception of the uploads directory) owned by root and writable only by root, while being interpreted by a non-root user (www-data). This will leverage the security inherent in the Linux kernel:

# Directories: traversable and readable by everyone, writable only by the owner
find . -type d -exec chmod 755 {} \;
# Files: readable by everyone, writable only by the owner
find . -type f -exec chmod 644 {} \;
# Everything owned by root ...
chown -R root:root .
# ... except uploads, which the web server user must be able to write
chown -R www-data:www-data wp-content/uploads

Note that you can still upgrade WordPress manually from time to time. You could even write a shell script for it.
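
A minimal sketch of such a script, assuming the document root /var/www/wordpress (adjust to your setup): it fetches the latest release, copies it over the existing installation, and re-applies the restrictive permissions from above.

#!/bin/sh
set -e
# Fetch and unpack the latest WordPress release
cd /tmp
curl -LO https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz
# Copy the new core files over the old installation (nothing is deleted,
# so wp-content/uploads is left alone)
cp -a wordpress/. /var/www/wordpress/
# Re-apply the read-only ownership and permissions
cd /var/www/wordpress
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chown -R root:root .
chown -R www-data:www-data wp-content/uploads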

Restrict serving of files

Disable direct access to wp-config.php, which contains very sensitive information that would be revealed should the PHP not be processed correctly. In Nginx:

location = /wp-config.php {
    deny all;
}

Disable PHP execution in the uploads directory. In Nginx:

location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}

Restrict PHP

I’ll refer the reader to excellent external articles that have already been written – please do implement the suggestions therein:

Hardening PHP from PHP.ini

25 PHP Security Best Practices
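
As a first taste of what those articles cover, here is a minimal, hypothetical php.ini sketch (the open_basedir path is an assumption; tailor the disabled-function list to what your plugins actually need):

; Disable functions that spawn processes
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
; Confine PHP file access to the WordPress tree and a temp directory
open_basedir = /var/www/wordpress:/tmp
; Never include/require code from remote URLs
allow_url_include = Off
; Do not advertise the PHP version in HTTP headers
expose_php = Off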

Host WordPress in a Virtualization Environment

In addition to all of the above, any kind of publicly exposed web application (not just WordPress) should really be hosted in an isolated environment. Docker seems promising for this purpose. I found the following great external tutorial, in two parts, about generating a LAMP Docker image:

https://codeable.io/wordpress-developers-intro-docker/

https://codeable.io/wordpress-developers-intro-to-docker-part-two/

100% HTTPS in the internet? Non-Profit makes it possible!

Note: This post is 7 years old. Some information may no longer be correct or even relevant. Please, keep this in mind while reading.

HTTPS on 100% of websites in the internet? This has just gotten a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) non-profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, they don’t require cumbersome registration, paperwork, setup, and payment. The certificates are fetched in an automated way through an API (the ACME protocol – Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

Dedicated to transparency, generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot, a regular command-line program written in Python whose source code is available on GitHub.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example certs for 3 domains are fetched in one command with the -d switch):

certbot-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com -d blah.example.com

With the -w flag you tell the script where to put temporary static files (a sub-folder .well-known will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is identical to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.
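
If the server block of your site does not already serve static files from that webroot, you may need a location for the challenge files. A minimal Nginx sketch, assuming the webroot from the command above:

location /.well-known/acme-challenge/ {
    root /var/www/example;
}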

Eventually, the certificate (already chained, which is nice!) and the private key are copied into /etc/letsencrypt/live/example.com/:

fullchain.pem
privkey.pem

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.
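
In Nginx, for example, these are the two relevant directives inside the server block of the HTTPS site:

ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;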

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates that you have loaded to your machine is as easy as …

./certbot-auto renew

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!
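
A minimal /etc/crontab sketch (the path to certbot-auto is an assumption; adjust it to wherever you placed the script):

# Attempt renewal twice a day; only certificates close to expiry are actually renewed
17 5,17 * * *   root   /opt/certbot/certbot-auto renew --quiet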

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard certificates. For these, you still have to pay money to some other CAs which support them. But in the code example shown above, you would generate only one certificate for the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. Except for SaaS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that commercial certificate authorities charge 1000, 100, or even 30 bucks per certificate per year. Where is the work invested in such a certificate that makes it worth so much? The generation of a certificate is automated and takes a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!

Icon made by Freepik from www.flaticon.com