Public Key Pinning

Sidenote: I’ve been working on converting my blog from WordPress to Jekyll, which is why I’ve been slow to write new posts.

I’ve added public key pinning, or HPKP for short, to my site. I initially wanted to wait until my new blog launched, but I was rather anxious to play with it, so I pulled the trigger anyway.

So what is public key pinning, anyway? In short, it’s an HTTP header that tells user agents (browsers) exactly which public keys to expect for a particular domain, and instructs them to remember those keys for a period of time. The purpose is that if an active attacker were to forge an X.509 certificate, even one issued by a legitimate certificate authority, the forged certificate would be rejected because its public key was not previously pinned.

The HPKP header, named Public-Key-Pins, looks like this (line breaks added for readability):

Public-Key-Pins: pin-sha256="7qVfhXJFRlcy/9VpKFxHBuFzvQZSqajgfRwvsdx1oG8=";
    pin-sha256="/sMEqQowto9yX5BozHLPdnciJkhDiL5+Ug0uil3DkUM=";
    max-age=5184000

There are a few things going on here. Each “pin-sha256” value is simply a SHA256 digest of the DER-encoded public key, base64 encoded. Since OpenSSL can derive the public key from the private key, the digest can be calculated from the private key like so:

openssl rsa -in mykey.key -outform der -pubout | openssl dgst -sha256 -binary | base64

The digest for the current certificate on my site is /sMEqQowto9yX5BozHLPdnciJkhDiL5+Ug0uil3DkUM=. There is also a “max-age” value, which tells the browser how long it should retain the “memory” of the pinned keys, in seconds. For this site it is currently set to two months. Browsers also support pinning a key with a SHA1 digest, in which case you specify it as “pin-sha1”.
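You can also compute the pin from an existing certificate by extracting its public key first. A sketch, assuming an RSA key (mysite.crt is a placeholder file name):

openssl x509 -in mysite.crt -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | base64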

HPKP is a “trust on first use” security feature, meaning that the browser has no way to validate that what is set in the header is actually correct the first time it encounters the pinned keys. When the user agent sees the site for the first time, it pins those keys. Every time the user agent connects to the server again, it re-evaluates the HPKP header. This lets you add new public keys, or remove expired or revoked ones. It also allows you to set the max-age to zero, which tells the user agent to remove the pinned keys. Note that a user agent will only pin the keys if the HTTPS certificate is valid; like HSTS, if the certificate is not trusted, the public keys will not be pinned.

There is a potential issue, though, if you only pin one key: replacing a pinned key can lock someone out of the site for a very long time. Let’s say that the public key is pinned for two months, and someone visits the site, so the user agent records the pinned keys. One month later, you have to replace the certificate because the private key was lost or compromised, and you update the Public-Key-Pins header accordingly. However, the site will not load for that person. As soon as the TLS session is established, the browser notices that the new certificate does not match what was pinned, and immediately aborts the connection. It can’t evaluate the new header because it treated the TLS session as invalid, and never even made an HTTP request to the server. That person will not be able to load the site for another month.

This is why HPKP requires a “backup” key, and why I have two pinned keys. A backup key is an offline key that is not used in production, so that if the current one does need to be replaced, you can create a new certificate from the backup key. This allows user agents to continue to load the site and update the HPKP values accordingly. You would then remove the revoked key’s pin and add a new backup to the header. A backup key is so important that user agents mandate it: you cannot pin a single public key. There must be a second pin, which is assumed to be the backup. If a pin matches a certificate in the TLS session’s certificate chain, the user agent assumes it cannot possibly be a backup, since it is in production.

I used OpenSSL to generate a new public / private key pair:

openssl genrsa -out backupkey.key 2048
openssl rsa -in backupkey.key -outform der -pubout | openssl dgst -sha256 -binary | base64

I can then use that backup key to create a new CSR should my current certificate ever need to be replaced.
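A sketch of what that CSR generation might look like (the subject here is a placeholder):

openssl req -new -key backupkey.key -out backup.csr -subj "/CN=example.com"

Using Chrome’s chrome://net-internals#hsts page, I can verify that Chrome is indeed pinning my public keys.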

[Screenshot: the pinned public keys shown in Chrome’s net-internals page]

Dynamic public key pinning is relatively new; only Chrome 38+ and Firefox 35+ support it. It also presents much more risk than Strict-Transport-Security, since losing the keys in operation makes the site unloadable. However, I do expect that this will become a good option for site operators that must strictly ensure their sites operate safely.

Impressions of Soylent

A little over a week ago, I got my first shipment of Soylent after patiently waiting for five months. I decided that I was going to try eating it every day for lunch, just to see if I could figure out what all of the hype was about. For those that don’t know, Soylent is a powdered “food” that you mix with water and an oil additive, which can supposedly replace meals, or even your entire diet. Indeed, there are people that claim they are living entirely off of Soylent.


My Soylent and pitcher arrived in simple boxes.

The idea is compelling for a couple of reasons. Firstly, if it lives up to its claim, it is cost effective. It works out to just over $4 a meal for me, and with recurring shipments, to $3.33 a meal. Packing lunch every evening is a bit of a hassle, and even when I do, I’m always in a rush in the morning and leave it in the fridge. Lunch in Washington DC can get expensive; a simple sandwich will run you about $6, and over a working week that adds up quite a bit. Halving those costs seemed appealing, and I wouldn’t have to do any preparation; I can just leave a bag of Soylent at work. Secondly, it supposedly provides balanced nutrition. It’s tempting to eat junk food for lunch and convince yourself it isn’t that bad.

Preparing it is dead simple. You mix water and Soylent together in a 2:1 ratio by volume, and add the oil mixture to it. The powder itself provides most of the nutrition and mass, while the oil adds Omega-3. The oil is algal, meaning algae-based, so Soylent happens to be vegan, too.


The bottle of Oil Blend with a bag of Soylent. There were several bags in the shipment. The oil is pretty much tasteless.

Soylent provides a pitcher for making large quantities at once, which isn’t exactly what I wanted. Prepared Soylent keeps for about two days refrigerated, and I wanted to keep it in its powdered form as long as I could, without occupying too much space in the work fridge. Soylent instructs you to mix it together and shake and stir it a bit, but I found the texture very grainy and clumpy at first. I then opted to get a Jaxx shaker for a few dollars. Preparing the Soylent in one of these fixes the clumpiness, and it comes out pretty smooth. As for taste, I think it is quite pleasant. I was rather surprised: it has a vanilla-and-oats taste, which makes sense, since most of the mass and carbohydrates in Soylent come from powdered oats. It was neither too sweet nor too bland.

It took me a day or two to figure out how to get “full” from it. Soylent is pretty much a liquid, and I never felt satiated by it at first; the hunger just evaporated 30 minutes to an hour after eating. One thing that did help was splitting the meal: I’ll eat one half, wait an hour or so, then eat the second half. That helps me feel fuller, and it keeps me feeling full for longer.

Some have heard that Soylent makes some people rather gassy. I haven’t experienced this myself, though I’m not on a full Soylent diet either. Soylent also recommends a few over-the-counter enzymes to help with digestion if it is a problem, but I haven’t had the need.

I decided to sign up for a recurring subscription since this seems to be working for me. I still continue to enjoy food, but I’m rather happy with Soylent for lunches. I don’t know if I would recommend it if you think it is weird — it is weird. But if the benefits seem appealing to you and it’s something you want to try, go for it.

Review: Bulletproof SSL and TLS by Ivan Ristic

I’m not one to write book reviews very often; in fact, you’ll see this is the first one on my blog. However, one book has caught my attention, and that is “Bulletproof SSL and TLS” by Ivan Ristic. I bought this book with my own money, and liked it enough to write a review. After giving it a thorough read, and some rereading (I’ll explain later), I am left with very good impressions.

This book can be appealing or discouraging in some ways, depending on what you want. I had a hard time figuring out who the right audience for this book is, because the assumed knowledge varies greatly from chapter to chapter. Part of the book reads like a clear instruction manual on how to do SSL and TLS right. The other part focuses a bit more on the guts of TLS and its history, which requires some background in general security and basic concepts in cryptography. While Ristic does a good job explaining some cryptographic primitives, I could see certain parts of this book being difficult to understand for those not versed in those subjects. I think this is especially noticeable in Chapter 7 on Protocol Attacks.

Other chapters, like 13-16, are clear, well-written guides on how best to configure a web server for TLS. These chapters are especially good because they help you make informed decisions about what you are actually doing, and why you may, or may not, want a certain configuration. Too often I see articles online that are blindly followed, with people not making decisions based on their needs. Ristic does a good job explaining many of these things, such as which protocols you want to support (and why), which cipher suites you should support (and why), and other subjects. This is in contrast to websites with very rigid instructions on protocol and cipher suite selection that may not be optimal for the reader, and just end up getting copied and pasted. This is a much more refreshing take on these subjects.

That said, I would read the book cover-to-cover if you are interested in these subjects. Some things might not be immediately clear, but it’s enough to get a big-picture view of what is going on in the SSL / TLS landscape.

Another aspect of this book that I really enjoyed was how up-to-date it is. I opted to get a digital PDF copy during the POODLE promotion week. It’s very surprising to be reading about a vulnerability that occurred in October 2014 in October 2014. That’s practically unheard of with most books and publishers, and this book really stands out because of it. This is also why I ended up rereading parts of the book: it has very up-to-date material.

While I am reluctant to consider myself an expert in anything, I did my best to configure my own server’s TLS before reading this book (enough to be happy with the protocols, cipher suites, and certificate chain), but by the time I finished it I had made a few changes to my server’s configuration, such as fine-tuning the session cache.

My criticisms are weak ones – this is a very good book. Any person that deals with SSL and TLS on any level should probably read this, or even those that are just curious.

First impressions of C# 6

A few days ago I decided to refresh my installation of Visual Studio “14” with a 2015 CTP. Until now I had done some basic tinkering with C# 6, but this time I decided to branch a moderately sized NuGet library I work on and clean it up with the new C# 6 syntax.

One thing that really stands out with C# 6 isn’t the language changes themselves, but the process by which those changes are made. Quite a few proposed enhancements didn’t make the final cut, and people were actually able to use them and try them out. It’s unfortunate that not everything made it through, but Microsoft’s Mads Torgersen and team have been very transparent about why a feature didn’t make it to the end. I’m very impressed that Microsoft is able to do these things as openly as they are, which is in stark contrast to how they did things only a few years ago.

C# 6 is all syntactic sugar, with a few exceptions. The one feature I found myself liking the most is expression bodies. Consider a C# snippet like this:

public override bool CanRead {
	get {
		return true;
	}
}

That’s pretty verbose syntax for “return true” on a property. Sure, I could collapse the curly braces onto a single line, but that isn’t really solving the problem. Expression bodies to the rescue:

public override bool CanRead => true;

Much better! I can write this now as one line without feeling like I am cheating. Expression bodies aren’t limited to just properties, either. They can be used for methods, of course:

public override int GetHashCode() => _thing.GetHashCode();

I started getting really into this, but then I ran into a few hiccups. First off, I should back up and say that the library most of this was aimed at is heavy in platform invoke and native memory, which is not typical of most projects. My first stumble was that you cannot use an expression body in a finalizer. Here is what I was trying to accomplish:

~Frob() => Dispose(false);

I typically follow the dispose pattern, so almost all of my finalizers call Dispose with false as the disposing parameter.
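For context, here’s a minimal sketch of that pattern, using Frob as a placeholder type:

public class Frob : IDisposable
{
	public void Dispose()
	{
		Dispose(true);
		// Everything is already cleaned up; the finalizer no longer needs to run.
		GC.SuppressFinalize(this);
	}

	protected virtual void Dispose(bool disposing)
	{
		// disposing == false means we were called from the finalizer,
		// so only unmanaged resources may be touched here.
	}

	~Frob()
	{
		Dispose(false);
	}
}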

The expression-bodied finalizer just won’t compile. I initially thought it was just a limitation of the preview and that it’d get fixed. I dug deeper, though, and it appears to be a design choice:

Finalizers: No

Finalizers are side effecting, not value returning.

The expression is limited in what it can be, too. The body of an expression-bodied member must be an expression, so you cannot use a throw statement as the body (though the expression can still throw at runtime).
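For example, this won’t compile in C# 6, because throw is a statement rather than an expression:

// Not valid in C# 6: a throw statement can't be used as an expression body.
// public override string ToString() => throw new NotImplementedException();

// A block body is still required for members that only throw:
public override string ToString()
{
	throw new NotImplementedException();
}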

String interpolation looks nice, too. It’s something I couldn’t find a lot of places to use, but combining it with expression bodies, I was able to do some nice cleanup. I was able to go from this:

public override string ToString()
{
	return Name + " (" + Type + ")";
}

To this:

public override string ToString() => "\{Name} (\{Type})";

I understand that the syntax of string interpolation is set to change, but the basic concepts are all the same still.

I’ve never been a big fan of properties with private setters. In many code bases, you will see something like this:

public class Frob
{
	public string Bar { get; private set; }

	public Frob(string bar)
	{
		Bar = bar;
	}
}

The intention here is that the property is only settable from the constructor. I don’t typically like the private setter for that, because it conveys that the setter can, and should, be used from the rest of the class. In that scenario I would still use a private readonly field to describe the intent that it should only ever be set in the constructor.

In C# 6, there is a great way to do this. I can omit the setter altogether and still set the property from a constructor, but only from the constructor.

I went from this:

public class Frob
{
	private readonly string _bar;

	public string Bar
	{
		get
		{
			return _bar;
		}
	}

	public Frob(string bar)
	{
		_bar = bar;
	}
}

To this:

public class Frob
{
	public string Bar { get; }

	public Frob(string bar)
	{
		Bar = bar;
	}
}

This is yet another really nice win on cutting down boilerplate code. You can also assign the property in the property declaration itself:

public override bool CanRead { get; } = true;

VB.NET developers have actually had this for a while, so it should be familiar to them.

One feature I was very interested in, but didn’t get much chance to use, is the null propagation operator, or “null safety operator”. Consider code like this:

public User GetUser()
{
	if (_context == null) return null;
	return _context.User;
}

I can now simplify that to this:

public User GetUser()
{
	return _context?.User;
}	

The idea is that if the operand on the left side of the dot operator is null, then the whole expression evaluates to null instead of evaluating the right side.
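Conceptually, it behaves like this hand-written equivalent (a sketch of the semantics, not the exact compiler output):

public User GetUser()
{
	// Evaluate _context once; only touch .User when it isn't null.
	var context = _context;
	return context == null ? null : context.User;
}

I didn’t get much of a chance to use this, but it did come in handy in a few places: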

public override int GetHashCode()
{
	if (KeyFormatValue == null) return 0;
	return KeyFormatValue.GetHashCode();
}

I condensed it into:

public override int GetHashCode() => KeyFormatValue?.GetHashCode() ?? 0;

The only feature in C# 6 that I am a little dubious about is the “module” syntax. I think it will just take me some time to figure out how to use it right without abusing it. In a similar vein, I couldn’t find any use for parameterless constructors on structs; this library has a lot of structs, but they are all used for marshaling.

Putting all of these things together, I was able to achieve a lot. I managed to cut out about 450 lines of code in a single pass of applying the new syntax. More importantly, though, the syntax is an improvement over what we have today. I’m sure if I really wanted to I could have cut 450 lines with some foolery, but having a more succinct syntax to achieve it is the real winner.

Content-Security-Policy Nonces in ASP.NET and OWIN, Take 2

I last wrote about using nonces in content security policies with ASP.NET and OWIN. I’ve learned a few things since then that should help a little bit.

First, in my previous example, I used a bit of a shotgun approach by applying the CSP header in OWIN’s middleware. This worked effectively, but it had one downside: it added the CSP header to everything, including non-markup content like JPG and PNG files. While having the CSP header on these doesn’t hurt anything, it does add a minimum of 28 bytes every time the content is served.

[Screenshot: the Content-Security-Policy header being served with static image content]

Since we are using MVC, it makes sense to move this functionality into an ActionFilter and register it as a global filter. Here is the action filter:

public sealed class NonceFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var context = filterContext.HttpContext.GetOwinContext();
        // Generate a fresh 256-bit random nonce for this response.
        var nonceBytes = new byte[32];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(nonceBytes);
        }
        var nonce = Convert.ToBase64String(nonceBytes);
        // Stash the nonce in the OWIN environment so views can render it.
        context.Set("ScriptNonce", nonce);
        context.Response.Headers.Add("Content-Security-Policy", 
            new[] { string.Format("script-src 'self' 'nonce-{0}'", nonce) });
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
    }
}

Then we can remove our middleware from OWIN. Finally, we add it to our global filters list:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new NonceFilter());
    /* others omitted */
}
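If you’re using the standard MVC project template, that registration method gets invoked from Application_Start. A sketch of the usual wiring (FilterConfig and RouteConfig are the template’s names, not necessarily this project’s):

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Register global filters (including NonceFilter) at startup.
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}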

The NonceHelper used for rendering the nonce in script elements doesn’t need to change.
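For reference, here is a minimal sketch of what such a helper might look like, reconstructed as an HtmlHelper extension (the names and details here are assumptions, not code copied from the previous post):

public static class NonceHelper
{
    public static IHtmlString ScriptNonce(this HtmlHelper helper)
    {
        // Read back the nonce that NonceFilter stored in the OWIN environment.
        var context = helper.ViewContext.HttpContext.GetOwinContext();
        return new HtmlString(context.Get<string>("ScriptNonce"));
    }
}

A Razor view can then emit it with something like <script nonce="@Html.ScriptNonce()">.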

This adds the Content-Security-Policy header to MVC responses, but not static content like CSS or JPG files. This also has the added benefit of working in projects that don’t use OWIN at all.

This does put more of a burden on you to set Content-Security-Policy in other places, though, such as static HTML files, or anywhere else the browser is interpreting markup.