Sometimes programmers hesitate to make their software open source because they fear that revealing the source code would allow attackers to ‘hack it’.
Certainly there are specific cases where this is true, for example when software was designed with “security through obscurity”, but it is not true as a general rule.
In my opinion, if inspection of the source code allows an attacker to ‘hack it’, then the programmer has done something wrong. Security primarily comes from secure algorithms, which remain secure regardless of whether their implementation is public. This is also known as Kerckhoffs’s principle.
OpenSSL is a case in point: it is open source, yet it powers HTTPS all over the internet. “But,” you say, “it is only secure because its code is kinda obscure.” Well, no: cryptographically secure algorithms exhibit astonishing properties. For example, the one-time pad encryption technique is extremely simple and exhibits “perfect secrecy”, which Wikipedia defines as follows:
> One-time pads are “information-theoretically secure” in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon about the same time. His result was published in the Bell Labs Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.
>
> Claude Shannon proved, using information theory considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because, given a truly random key which is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely.
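The last sentence of that quote is easy to demonstrate in a few lines of Ruby. The helper and the sample strings below are my own illustration (not from the Wikipedia article): for any XOR ciphertext, an adversary can construct a key that “decrypts” it to any other plaintext of the same length, so the ciphertext alone reveals nothing.

```ruby
# XOR two equal-length strings byte by byte.
def xor_bytes(a, b)
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.pack("c*")
end

# Encrypt a 14-byte message with a 14-byte key (illustrative only;
# a real one-time pad key must be truly random):
cipher = xor_bytes("attack at dawn", "some secretkey")

# For ANY other 14-byte plaintext, there exists a key under which
# the very same ciphertext decrypts to it:
fake_key = xor_bytes(cipher, "retreat now!!!")
xor_bytes(cipher, fake_key) # => "retreat now!!!"
```

Since every same-length plaintext is reachable under some key, and all keys are equally likely, the ciphertext gives the cryptanalyst nothing to work with.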
Take, for example, the following simple implementation of the one-time pad (via XOR) in Ruby, which took me just a couple of minutes to write:
```ruby
def otp(msg, key)
  result = []
  msgraw = msg.bytes
  keyraw = key.bytes
  msgraw.length.times do |n|
    result << (msgraw[n] ^ keyraw[n])
  end
  return result.pack("c*")
end

cipher = otp("Hello!", "my key") # => "%\x1CL\a\nX"

# Then somewhere else:
msg = otp(cipher, "my key") # => "Hello!"
```
This code is open source, but it nevertheless exhibits the property of perfect (i.e. 100%) security “even against adversaries with infinite computational power”, provided that the key is never transmitted over insecure channels and never reused.
Sure, the one-time pad is not practical, and one could probably exploit weaknesses in Ruby or the underlying operating system. But that is not the point. The point is that, given a proper implementation, software can be made open source without compromising its security.
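One impracticality is worth spelling out (this sketch is my own illustration, with made-up sample strings): the key must be as long as the message and must never be reused. If the same key encrypts two messages, an eavesdropper who XORs the two ciphertexts cancels the key out entirely:

```ruby
# XOR two equal-length strings byte by byte.
xor = ->(a, b) { a.bytes.zip(b.bytes).map { |x, y| x ^ y }.pack("c*") }

key = "supersecretkey"
c1  = xor.call("attack at dawn", key)
c2  = xor.call("retreat now!!!", key)

# c1 XOR c2 == plaintext1 XOR plaintext2 -- the key has vanished,
# which is a classic starting point for cryptanalysis:
xor.call(c1, c2) == xor.call("attack at dawn", "retreat now!!!") # => true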
To contrast this, here is an example of (bad) source code which should not be made public, because it only creates a false sense of security:
```ruby
def obscure(msg)
  result = []
  msgraw = msg.bytes
  msgraw.length.times do |n|
    result << ((msgraw[n] + 7) ^ 99)
  end
  return result.pack("c*")
end

cipher = obscure("Hello!")
```
`((msgraw[n] + 7) ^ 99)` is equivalent to a hard-coded secret. Sure, the obscured message may look random when transmitted over a public network. But the algorithm could easily be reverse-engineered by cryptanalysis. And if the source code were revealed, it would be trivial to decode past and future messages.
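Just how trivial? Here is a hypothetical `deobscure` function an attacker could write within seconds of reading the source (the compact restatement of `obscure` is mine, but it is behaviorally identical to the version above): it simply undoes the two hard-coded operations in reverse order.

```ruby
# obscure() restated compactly, for reference:
def obscure(msg)
  msg.bytes.map { |b| (b + 7) ^ 99 }.pack("c*")
end

# The inverse: undo the XOR with 99, then subtract the added 7.
def deobscure(cipher)
  cipher.bytes.map { |b| (b ^ 99) - 7 }.pack("c*")
end

deobscure(obscure("Hello!")) # => "Hello!"
```

Contrast this with the one-time pad above: there, knowing the algorithm helps the attacker not at all, because the security lives entirely in the key.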
“Open source” does not imply “insecure”. Security comes from secure, not secret, algorithms (which of course includes freedom from bugs). What counts as “secure” is defined mathematically, and mathematics (and, by extension, physics) cannot be bribed. It is not easy to come up with such algorithms, but it is possible, and there are many successful examples.
Needless to say, not every little piece of code should be made open source; ideally, programmers will only publish generally useful and readable software which they intend to maintain. But that is a subject for another blog post.