Regaining Access to OS X after a lost Yubikey

The Yubikey by Yubico has an interesting use beyond just OTP. It can do a myriad of things, including storing certificates, OATH, and, most interestingly, HMAC-SHA1 challenge-response. The last is notable because it can be used with a PAM module.

OS X supports PAM modules, and one of Yubico’s touted features is that you can install a PAM module on OS X to get two-factor authentication for your OS X account: in addition to the password, the Yubikey must also be plugged in.

I set that up a while ago and it had been working fine, but I ran into a situation where I needed to turn it off temporarily because I couldn’t actually log in – say, because I didn’t have my Yubikey with me.

Turns out this is really trivial. Just boot the Mac into recovery mode by holding Command+R during boot. This let me edit the /etc/pam.d/authorization file and comment out the Yubico PAM module. Once saved, a quick reboot command later, I was back into my account, two factor turned off. The only thing to note is that you want to edit the one on your Macintosh HD volume under /Volumes, not the authorization file that the recovery partition uses.
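From the Recovery Terminal, the edit can even be a one-liner; here is a sketch, assuming the volume is named “Macintosh HD” and the module line references pam_yubico.so (both vary by setup):

```shell
# Path to the authorization file on the main (non-recovery) volume.
# "Macintosh HD" is the default volume name; adjust if yours differs.
PAMFILE="${PAMFILE:-/Volumes/Macintosh HD/etc/pam.d/authorization}"

if [ -f "$PAMFILE" ]; then
    # Comment out any pam_yubico.so line, keeping a .bak copy for later.
    sed -i.bak 's/^\(auth.*pam_yubico\.so.*\)$/#\1/' "$PAMFILE"
else
    echo "not found: $PAMFILE (is the volume mounted?)" >&2
fi
```

The `.bak` copy makes it easy to restore the original line once the Yubikey turns up.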

This made my life easier, but it also led me to believe the Yubikey PAM module on local OS X accounts had diminished value (the story is different for remote authentication). If I can just turn it off with very little effort, no authentication required, that’s worrying.

There is a way to partially fix this: FileVault2. When you boot into the Recovery console with FileVault2 enabled, you cannot edit /etc/pam.d/authorization without knowing the password to the volume, since the volume is encrypted with your password. This, however, still reduces authorization to a single factor. If I have your password and no Yubikey, even with FileVault2 enabled I can get into the account, since I have physical access.

This takes a few seconds of extra work. First, you need the UUID of the volume that you want to decrypt (like “Macintosh HD”). Run

diskutil coreStorage list

and grab the UUID of the logical volume. From there, it’s just one more command:

diskutil coreStorage unlockVolume <UUID> -stdinpassphrase

Enter your password, and then the volume will be mounted in /Volumes/.

In an ideal world, the Yubikey would play a role in unlocking the FileVault2 volume. This is easy enough to do with BitLocker and certificates, since the Yubikey can act like a PIV card. However, I haven’t found a way to do this with FileVault2. Even in the case of BitLocker, it’s difficult to accomplish without the machine being joined to an Active Directory domain and using an Active Directory account.

My advice would be to take the value that the Yubikey PAM module gives with a grain of salt for local account protection. At least on OS X (I have yet to bother trying on Windows), it’s quite easy to turn off just by having access to the physical machine.

A lot of people will be quick to point out that “if you have physical access to the hardware, then it’s game over.” That doesn’t mean physical security should be completely ignored, though. Each little improvement has value.


Why I Don’t Terminate HTTPS with IIS

I recently made the claim that you should not use IIS to terminate HTTPS, and that you should instead use a reverse proxy like HAProxy or NGINX (ARR does not count, since it uses IIS).

I thought I should add a little more substance to that claim, and why I would recommend decoupling HTTPS from IIS.

IIS itself does not terminate SSL or TLS. That happens elsewhere in Windows – requests come in through the http.sys kernel driver, and the TLS work itself is handled by a component of Windows called SChannel. IIS’s ability to terminate HTTPS is therefore governed by what SChannel can, and cannot, do.

The TLS landscape has been moving very quickly lately, and we’re finding more and more problems with it. As these problems arise, we need to react to them quickly. Our ability to react is limited by the options that we have, and what TLS can do for us.

SChannel limits our ability to react in three major ways.

The first is that the best cipher suites available on Windows today (as of Windows Server 2012 R2) are still not good enough. The two big omissions are TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 and TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, which are usually recommended as among the first cipher suites you should offer. Oddly, there are some variants that come fairly close: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 and TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (and their AES-256 / SHA-384 variants). Both have their own issues. The former uses a slower, larger ephemeral key, and the latter uses CBC instead of an AEAD mode like GCM. Stranger still, ECDHE and AES-GCM can coexist, but only if you use an ECDSA certificate, so the cipher suite TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 does work.

This is a bit frustrating. Microsoft clearly has all of the pieces to make TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 a real thing. It can do ECDHE, it can do RSA, and it can do AES-GCM. Why they didn’t put those pieces together to make a highly desirable cipher suite, I don’t know.

The second issue is that even though the cipher suites are lacking – and I’m sure people at Microsoft know it – there hasn’t been an update to add them, despite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 being in wide adoption for quite some time now. SChannel just doesn’t get regular updates for new things; most additions, like new versions of TLS, have been limited to newer versions of Windows. Even when there was an update in May 2015 to add new cipher suites, ECDHE+RSA+AES-GCM wasn’t on the list. The KB article for the update has the details.

The final issue is that even if SChannel does have all of the components you want, configuring it is annoying at best and impossible at worst. SChannel handles all TLS on Windows, and SChannel is what gets configured. If, say, you want to disable TLS 1.0 in IIS, you configure SChannel to do so. But by doing that, you are also configuring every other component on Windows that relies on SChannel, such as Remote Desktop, SQL Server, Exchange, etc. You cannot configure IIS independently. You cannot turn off TLS 1.0 if you are running SQL Server 2008 R2 and want TLS on its TCP connections, and SQL Server 2012 and 2014 require updates before they support TLS 1.2. Even then, I don’t consider it desirable that IIS simply cannot be configured on its own for what it supports in regard to TLS.
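To illustrate how global that knob is, disabling TLS 1.0 is typically done with an SChannel registry change along these lines (a sketch from my memory of the SChannel registry layout – verify before applying; it takes effect machine-wide and requires a reboot):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000000
```

Once applied, Remote Desktop, SQL Server connections, and anything else speaking TLS 1.0 through SChannel loses that protocol too – there is no per-application switch.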

Those are my arguments against terminating HTTPS with IIS. I would instead recommend using NGINX, HAProxy, Squid, etc. to terminate HTTPS. All of these receive updates to their TLS stack. Given that most of them are open source, you can also readily re-compile them with new versions of OpenSSL to add new features, such as CHACHA20+POLY1305.
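To make the alternative concrete, here is a minimal sketch of NGINX terminating TLS in front of IIS. The server name, certificate paths, cipher list, and backend port are all assumptions for illustration, not a hardened configuration:

```
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.1 TLSv1.2;
    ssl_ciphers         ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;

    location / {
        # IIS listens on plain HTTP on a port only reachable from the proxy
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this split, adding a new cipher suite becomes an NGINX or OpenSSL upgrade rather than waiting on a new version of Windows.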

Using Chocolatey with One Get and HTTPS

Having rebuilt my Windows 10 environment again, it was time to start installing stuff I needed to use. I thought I should start making this scriptable, and I know Windows 10 has this fancy new package manager called One Get, so I thought I would give it a try.

I found a blog post from Scott Hanselman on setting up One Get with Chocolatey. Having set up Chocolatey, I ran Get-PackageSource to check that it was there, and this was the output:

Name          ProviderName     IsTrusted  IsRegistered IsValidated  Location
----          ------------     ---------  ------------ -----------  --------
PSGallery     PowerShellGet    False      True         False
chocolatey    Chocolatey       False      True         True

All seemed OK, but I noticed that the Chocolatey feed location was not HTTPS. This was obviously a bit concerning. I fired up Fiddler to check if it was actually doing HTTP queries, and yes, it was.

Chocolatey crush

It does appear that Chocolatey supports HTTPS, and after a bit of tinkering, I found the proper cmdlet to update the location.

Set-PackageSource -Name chocolatey -NewLocation https://chocolatey.org/api/v2/ -Force

After that, I re-ran my query, and queries were done over HTTPS now.

Chocolatey https crush

Experimenting with WebP

A few years ago, Google put out the WebP image format. I won’t dive into the merits of WebP; Google does a good job of that.

For now, I wanted to focus on how I could support it for my website. The thinking is that if I am happy with the results here, I can use it in other, more useful ways. The trick with WebP is that it isn’t supported by all browsers, so a flat “convert all images to WebP” approach wasn’t going to work.

Enter the Accept request header. When a browser makes a request, it includes this header to indicate to the server what content it is capable of handling, and its preferences. Chrome’s Accept header currently looks like this:

text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Chrome explicitly indicates that it is willing to process WebP. We can use this to conditionally rewrite what file is returned by the server.

The plan was to process all image uploads and append “.webp” to the file name, so foo.png becomes foo.png.webp. We’ll see why in a bit. The other constraint is that I don’t want to do this for all images; images that are part of WordPress itself, such as themes, will be left alone for now.

Processing the images was pretty straightforward. I installed the webp package then processed all of the images in my upload directory. For now we’ll focus on just PNG files, but adapting this to JPEGs is easy.

find . -name '*.png' | (while read file; do cwebp -lossless "$file" -o "$file.webp"; done)

Note: This is a bit of a tacky way to do it. Quoting $file takes care of paths with spaces, but filenames with newlines or stray backslashes would still trip up read; that wasn’t something I had to worry about here.

This converts existing images, and using some WordPress magic I configured it to run cwebp when new image assets are uploaded.

Now that we have side-by-side WebP images, I configured NGINX to conditionally serve the WebP image if the browser supports it.

map $http_accept $webpext {
    default         "";
    "~*image/webp"  ".webp";
}

This goes in the http section of the NGINX configuration (the map directive is only valid at the http level). It defines a new variable called $webpext by examining the $http_accept variable, which NGINX sets from the request’s Accept header. If $http_accept contains “image/webp”, then $webpext is set to “.webp”; otherwise it is an empty string.

Later in the NGINX configuration, I added this:

location ~* \.(?:png|jpg|jpeg)$ {
    add_header Vary Accept;
    try_files $uri$webpext $uri =404;
    # rest omitted for brevity
}

NGINX’s try_files is clever. For PNG, JPG, and JPEG files, we first try to find a file named the URI plus the $webpext variable, which is empty if the browser doesn’t support WebP and “.webp” if it does. If that file doesn’t exist, try_files moves on to the original, and lastly returns a 404 if neither of those worked. NGINX will automatically handle the content type for you.

If you are using a CDN like CloudFront, you’ll want to configure it to vary the cache based on the Accept header, otherwise it will serve WebP images to browsers that don’t support it if the CDN’s cache is primed by a browser that does support WebP.

So far, I’m pleased with the WebP results for lossless compression: the images are smaller in a non-trivial way. I ran all the images through pngcrush -brute and cwebp -lossless and compared the results. The average difference between the crushed PNG and the WebP was 15,872.77 bytes (WebP being smaller). The maximum difference was 164,335 bytes, and the smallest was 1,363 bytes. Even the smallest difference was over a kilobyte. That doesn’t seem like much, but it’s a huge difference if you are trying to maximize the use of every byte of bandwidth. Since none of the values were negative, WebP outperformed pngcrush on all 79 images.
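A comparison like this is easy to script. Here is a rough sketch, assuming the crushed PNGs and their side-by-side .webp files live in the current directory (the filenames and layout are assumptions):

```shell
# For each PNG that has a side-by-side WebP (foo.png / foo.png.webp),
# print the byte savings and keep a running total and count.
total=0; count=0
for png in *.png; do
    webp="$png.webp"
    [ -f "$webp" ] || continue
    diff=$(( $(wc -c < "$png") - $(wc -c < "$webp") ))
    echo "$png: $diff bytes smaller as WebP"
    total=$((total + diff)); count=$((count + 1))
done
if [ "$count" -gt 0 ]; then
    echo "average: $((total / count)) bytes over $count images"
fi
```

A negative number in the output would mean the crushed PNG beat WebP for that image.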

These figures are by no means conclusive, it’s a very small sample of data, but it’s very encouraging.

Site Changes

Eventually I’ll blog about them in detail, but I’ve made a few changes to my site.

First, I turned on support for HTTP/2. Secondly, I added support for the CHACHA20_POLY1305 cipher suite. Third, if your browser supports it, images will be served in the WebP format. Currently the only browser that does is Chrome.

My blog tends to be a vetting process for adopting things. If all of these things go well, then I can start recommending them in non-trivial projects.