Hardening WordPress against hacking attempts


The WordPress Codex states:

Security in WordPress is taken very seriously

This may be true, but in reality you yourself have to take additional measures, or you’ll be left with a false sense of security.

With the default settings of WordPress and PHP, the minute you host WordPress and give access to a single non-security-conscious administrative user, your entire hosting environment should be considered compromised.

The general problem with WordPress and PHP is that rather than thinking about which few essential features to turn on (whitelisting), you have to think about dozens of insecure features to turn off (blacklisting).

The excellent article “Common WordPress Malware Infections” gives you an overview of what you’re up against when it comes to protecting WordPress from malware.

Below are a couple of measures you should take, starting with the most important ones.

Disable WordPress File Editing

WordPress comes with the PHP file editor enabled by default. One of the most important rules of server security is that you never, ever allow users to execute arbitrary program code. This is just inviting disaster. All it takes is for the admin password to be stolen, sniffed or guessed, and the WordPress PHP code can be injected with PHP malware. Then, if you haven’t taken other restricting measures in php.ini (see the section below), PHP may now

  • Read all readable files on your entire server
    • Include /etc/passwd and expose the names of all user accounts publicly
    • Read database passwords from wp-config.php of all other WordPress installations and modify or even delete database records
    • Read source code of other web applications
    • etc.
  • Modify writable files
    • Inject more malware
    • etc.
  • Use PHP’s curl functions to make remote requests
    • Turns your server into part of a botnet

So among the first things to do when hosting WordPress is to disable file editing capabilities in wp-config.php:

define('DISALLOW_FILE_EDIT', true);

But that measure assumes that WordPress plus third-party plugins are secure enough to enforce their own restrictions, which one cannot assume, so it is better to…

The “Big Stick”: Remove File Write Permissions

I’ll posit here something that I believe to be self-evident:

It is safer to make WordPress files read-only and thus disallow frequent WordPress (and third-party plugin) upgrades than it is to allow WordPress (and third-party plugins) to self-modify.

Until I learn that this postulate is incorrect, I’ll propose that you make all WordPress files (with the obvious exception of the uploads directory) owned by root, writable only by root, and interpreted by a non-root user (www-data in the example below). This leverages the security inherent in the Linux kernel:

# directories: listable and traversable, but writable only by root
find . -type d -exec chmod 755 {} \;
# files: readable, but writable only by root
find . -type f -exec chmod 644 {} \;
# root owns everything...
chown -R root:root .
# ...except uploads, which the web server user must be able to write to
chown -R www-data:www-data wp-content/uploads

Note that you can still upgrade WordPress manually from time to time. You could even write a shell script for it, as sketched below.
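A minimal sketch of such an upgrade script, assuming WordPress lives in /var/www/example (a placeholder path) and the script is run as root – back up first:

#!/bin/sh
set -e
cd /var/www/example
# fetch and unpack the latest core release over the existing installation
curl -sSO https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz --strip-components=1
rm latest.tar.gz
# re-apply the read-only permission scheme from above
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chown -R root:root .
chown -R www-data:www-data wp-content/uploads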

Restrict serving of files

Disable direct access to wp-config.php, which contains very sensitive information that would be revealed should the PHP not be processed correctly. In Nginx:

location = /wp-config.php {
    deny all;
}

Disable PHP execution in the uploads directory. In Nginx:

location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}

Restrict PHP

I’ll refer the reader to excellent articles already written elsewhere – please do implement the suggestions therein:

Hardening PHP from PHP.ini

25 PHP Security Best Practices
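To give a taste of what those articles cover, a hardened php.ini typically switches off risky features and confines PHP to its document root. A minimal sketch (the function list and paths are assumptions – adapt them to what your site actually needs):

; php.ini hardening sketch – adapt before use
expose_php = Off
display_errors = Off
allow_url_fopen = Off
allow_url_include = Off
; block the shell-execution primitives that malware typically relies on
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
; confine file access to the site directory and a temp directory
open_basedir = /var/www/example:/tmp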

Host WordPress in a Virtualization Environment

In addition to all of the above, any kind of publicly exposed web application (not just WordPress) should really be hosted in an isolated environment. Docker seems promising for this purpose. I found the following great external two-part tutorial about generating a LAMP Docker image:

https://codeable.io/wordpress-developers-intro-docker/

https://codeable.io/wordpress-developers-intro-to-docker-part-two/
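To sketch the idea, the official wordpress and mysql images from Docker Hub can be wired together like this (container names, the password and the port mapping are placeholders):

# create an isolated network for the two containers
docker network create wp-net
# database container
docker run -d --name wp-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress mysql:5.7
# WordPress container, reachable on host port 8080
docker run -d --name wp --network wp-net -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_PASSWORD=changeme wordpress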

Hashing passwords: SHA-512 can be stronger than bcrypt (by doing more rounds)


On a server, user passwords are usually stored in a cryptographically secure way, by running the plain passwords through a one-way hashing function and storing its output instead. A good hash function is irreversible. Leaving dictionary attacks aside, and assuming salts are used, the only way to find the original input/password that generated a stolen hash is to simply try all possible inputs and compare the outputs with the stored hash. This is called bruteforcing.

Speed Hashing

With the advent of GPU, FPGA and ASIC computing, it is possible to make bruteforcing very fast – this is what Bitcoin mining is all about, and there’s even a computer hardware industry revolving around it. Therefore, it is time to ask yourself if your hashes are strong enough for our modern times.

bcrypt was designed to be costly to implement in specialized hardware, and due to its RAM access requirements it doesn’t lend itself well to parallelized GPU implementations. However, we have to assume that computer hardware will continue to become faster and more optimized; therefore we should not rely on the security margin that bcrypt’s more difficult implementation in hardware and GPUs affords us for the time being.

For our real security margin, we should therefore only look at the so-called “variable cost factor” that bcrypt and other hashing functions support. With this feature, a hashing function can be made arbitrarily slow and therefore costly, which helps deter brute-force attacks on the hash.

This article will investigate how you can find out if bcrypt is available to you, and if not, show how to increase the variable cost factor of the more widely available SHA-512 hashing algorithm.

Do you have bcrypt?

If you build your own application (web, desktop, etc.) you can easily find and use bcrypt libraries (e.g. Ruby, Python, C, etc.).

However, some important 3rd party system programs (Exim, Dovecot, PAM, etc.) directly use glibc’s crypt() function to authenticate a user’s password. And on most Linux distributions, glibc’s crypt() does NOT implement bcrypt (the BSDs, by contrast, do have it)! The reason for this is explained here.

If you are not sure if your Linux distribution’s glibc supports bcrypt (man crypt won’t tell you much), simply try it by running the following C program. The function crypt() takes two arguments:

char *crypt(const char *key, const char *salt);

For testing, we’ll first generate an MD5 hash because that is most certainly available on your platform. Add the following to a file called crypttest.c:

#include <stdio.h>
#include <crypt.h>  /* declares crypt(); link with -lcrypt */

int main(void) {
    /* "$1$" in the salt argument selects MD5; "salthere" is the salt */
    char *hashed = crypt("xyz", "$1$salthere");
    puts(hashed);
    return 0;
}

Suppose xyz is our very short input/password. The $1$ option in the salt argument means: generate an MD5 hash. Type man crypt for an explanation.

Compile the C program:

gcc -o crypttest crypttest.c -lcrypt

Run it:

./crypttest

The output:

$1$salthere$BEMucfLXR/yP5di4BYptX1

Next, in the C program change $1$ to $6$, which means SHA-512. Recompile, run, and it will output:

$6$salthere$X.eZx5ScCKO4mHW1o5dqLB77qtuUtbgVr52N1LGVHWONeoav/p77GEcJ4zytaMk8cesMiF2rKL7weLzp3BUlq1

Next, change $6$ to $2a$, which means bcrypt. Recompile, run, and the output on my distribution (Debian) is:

Segmentation fault

So, on my Debian system, I’m out of luck: crypt() presumably returns NULL for the unsupported $2a$ prefix, and puts(NULL) crashes. Am I going to change my Linux distribution just for the sake of bcrypt? Not at all. I’ll simply use more rounds of SHA-512 to make hash bruteforcing arbitrarily costly. It turns out that SHA-512 (bcrypt too, for that matter) supports an arbitrary number of rounds.

More rounds of SHA-512

glibc’s crypt() allows specification of the number of rounds of the main loop in the algorithm. This feature is strangely absent from the man crypt documentation, but is documented here. glibc’s default for a SHA-512 hash is 5000 rounds. You can specify the number of rounds as an option in the salt argument. We’ll start with 100000 rounds.

In the C program, pass the string $6$rounds=100000$salthere as the salt argument, i.e. change the call to:
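char *hashed = crypt("xyz", "$6$rounds=100000$salthere");

Recompile and measure the execution time: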

time ./crypttest
$6$rounds=100000$salthere$F93c6wdBjaMn9c8VY.ZBPJfA2vsk1z.YmEJhm8twyUY/zPbZpdm73hQ2ePb.vTeu0d014pB076S1JW3VCCfhj.

real 0m0.048s

It took 48 milliseconds to generate this hash. Let’s increase the rounds by a factor of 10, to 1 million:

time ./crypttest
$6$rounds=1000000$salthere$9PJ4NaW6zmi/iuQdVtOvFXB3wdtNC7k5GOHRtnEtpJAMdBJ7asunZpCuqfjyx2vZdqRRaS/z33EXI9Z.nkNEb.

real 0m0.455s

We see that generating one hash took 10 times longer, i.e. about half a second: the cost scales linearly with the number of rounds.

Glibc allows setting the cost factor (rounds) from 1000 to 999,999,999.

The Proof lies in the Bruteforcing

Remember that above we chose the input/password xyz (length 3). If we only allow the characters [a-z], we get 26³ possibilities, or 17576 different passwords to try. With 48 ms per hash (100000 rounds), we expect the bruteforcing to take at most about 17576 × 0.048 s = 843 seconds (approx. 14 minutes).

We will use a hash solver called hashcat to confirm the expected bruteforcing time. Hashcat can use GPU computing, but I’m missing proper OpenCL drivers, so my results come from slower CPU computing only. That doesn’t matter here: the crux is that the variable cost factor (rounds) directly determines the cracking time, and it has to be chosen with the hardware of a presumed attacker in mind.

Now we’re going to crack the above SHA-512 hash (password/input was xyz), which was calculated with 100000 rounds:

./hashcat64.bin --force -m 1800 -a 3 '$6$rounds=100000$salthere$F93c6wdBjaMn9c8VY.ZBPJfA2vsk1z.YmEJhm8twyUY/zPbZpdm73hQ2ePb.vTeu0d014pB076S1JW3VCCfhj.'

$6$rounds=100000$salthere$F93c6wdBjaMn9c8VY.ZBPJfA2vsk1z.YmEJhm8twyUY/zPbZpdm73hQ2ePb.vTeu0d014pB076S1JW3VCCfhj.:xyz
 
Session.Name...: hashcat
Status.........: Cracked
Input.Mode.....: Mask (?1?2?2) [3]
Hash.Target....: $6$rounds=100000$salthere$F93c6wdBjaMn9c8...
Hash.Type......: sha512crypt, SHA512(Unix)
Time.Started...: Fri Sep 9 21:34:38 2016 (13 mins, 30 secs)
Speed.Dev.#1...: 52 H/s (13.42ms)
Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.......: 42024/80352 (52.30%)
Rejected.......: 0/42024 (0.00%)
Restore.Point..: 412/1296 (31.79%)

Note that it took about 14 minutes to find the plaintext password xyz from the hash, which confirms the estimation above.

Now let’s try to crack a bcrypt hash of the same input xyz which I generated on this website:

./hashcat64.bin --force -m 3200 -a 3 '$2a$08$wOT24w1Ki.or5CACwuAqR.D933N4GcwdV6zsWbmUQWB6UXsgY8KVi'

$2a$08$wOT24w1Ki.or5CACwuAqR.D933N4GcwdV6zsWbmUQWB6UXsgY8KVi:xyz

Session.Name...: hashcat
Status.........: Cracked
Input.Mode.....: Mask (?1?2?2) [3]
Hash.Target....: $2a$08$wOT24w1Ki.or5CACwuAqR.D933N4GcwdV6...
Hash.Type......: bcrypt, Blowfish(OpenBSD)
Time.Started...: Fri Sep 9 13:59:46 2016 (3 mins, 30 secs)
Speed.Dev.#1...: 222 H/s (11.13ms)
Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.......: 44800/80352 (55.75%)
Rejected.......: 0/44800 (0.00%)
Restore.Point..: 640/1296 (49.38%)

Note that bcrypt’s cost factor in this example is 08 (see bcrypt’s documentation for more info on this). The bruteforce cracking of the same password took only 3 minutes and 30 seconds – almost four times faster than the SHA-512 example, even though bcrypt is frequently described as being slow. This variable cost factor is often overlooked by bloggers who too quickly argue for one hash function over the other.

Conclusion

bcrypt is suggested in many blog posts in favor of other hashing algorithms. I have shown that by specifying a variable cost factor (rounds) for the SHA-512 algorithm, it is possible to make bruteforcing the hash arbitrarily costly. Neither SHA-512 nor bcrypt can therefore be said to be faster or slower than the other.

The variable cost factor (rounds) should be chosen in such a way that even resourceful attackers will not be able to crack passwords in a reasonable time, while authentication of legitimate users won’t consume too much of your server’s CPU resources.

When a hash is stolen, the salt may be stolen too, because they are usually stored together. A salt therefore won’t protect too short or dictionary-based passwords. The importance of choosing long, random passwords with lower-case, upper-case and special symbols can’t be emphasized enough. This also holds when no hashes are stolen and attackers simply try to authenticate directly with your application, trying one password after another. With random passwords, and assuming that the hash function implementations are indeed non-reversible, it is trivial to calculate the cost of brute-forcing their hashes.
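For example, assuming random 12-character passwords over an alphabet of 72 symbols (26 lower, 26 upper, 10 digits, 10 specials – an assumed character set) and the 52 H/s measured above, the cost is easy to estimate:

# bc does arbitrary-precision integer math
echo '72^12' | bc                        # ≈ 1.9 * 10^22 candidate passwords
echo '72^12 / 52 / 3600 / 24 / 365' | bc # ≈ 10^13 years at 52 hashes per second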

100% HTTPS in the internet? Non-Profit makes it possible!


HTTPS on 100% of websites in the internet? This just got a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) non-profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, they don’t require cumbersome registration, paperwork, set-up and payment. The certificates are fetched in an automated way through an API (the ACME protocol – Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

In the spirit of transparency, all generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot. It is a regular command-line program written in Python, and the source code is available on GitHub.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example, certs for 3 domains are fetched in one command via the -d switches):

certbot-auto certonly --webroot -w /var/www/example -d example.com -d www.example.com -d blah.example.com

With the -w flag you tell the script where to put temporary static files (a sub-folder .well-known will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is similar to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.
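In Nginx, serving that webroot during the challenge might look like this (a sketch; /var/www/example is the webroot passed via -w above):

location /.well-known/acme-challenge/ {
    root /var/www/example;
}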

Eventually, the (already chained, which is nice!) certificate and private key are copied into /etc/letsencrypt/live/example.com/:

fullchain.pem
privkey.pem

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.
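For Nginx, a minimal sketch might look like this (the server name and paths follow the example above):

server {
    listen 443 ssl;
    server_name example.com www.example.com blah.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ... the rest of your site configuration ...
}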

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates that you have loaded to your machine is as easy as …

./certbot-auto renew

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!
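Such a twice-daily cron job might look like this (the install path of certbot-auto is an assumption):

# m h dom mon dow – run at 04:17 and 16:17; --quiet only prints output on errors
17 4,16 * * * /opt/certbot-auto renew --quiet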

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard domain certificates. For these, you still have to pay money to other CAs that support them. But in the code example shown above, you would generate only one certificate for the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. Except for SaaS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that commercial certificate authorities charge 1000, 100 or even 30 bucks per certificate per year. Where’s the work invested into such a certificate that is worth so much? The generation of a certificate is automated and done in a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!
