  • Yubikey 4C Review

    In case you weren’t aware, Yubico launched a USB-C version of their popular Yubikey device. On launch day, the 13th, I ordered three of them and eagerly awaited their arrival.

    I’ve recently finished a laptop refresh at the house, which means “MacBook <something>”. For light use and travel, I have a MacBook with its single USB-C port. For heavier work, I have a MacBook Pro with Thunderbolt 3, which gives me four USB-C ports. I no longer have any laptops with USB-A ports.

    If your life is all, or mostly, USB-C, then the 4C is a great companion, and it works just as well as its predecessor.

    Comparing

    The 4C can go on a key ring, just like the 4 could. Their sizes are noticeably different, though. The 4C is smaller in width and height, at the expense of being thicker.

    [Images: 4C top view and 4C side view]

    I find the added thickness slightly troublesome when the key is attached to a key ring. The previous model left just enough clearance for the key ring to jut out from. With the additional thickness, I now have to prop my laptop up, put it on a stand, or find a new solution for the key ring. However, the smaller footprint is a welcome change, since the key is permanently affixed to my key chain.

    Functionality

    It’s identical to the original 4. It’s worth noting, however, that you can’t clone one Yubikey to another, so you may have to carry both for a while during a transition phase. This includes the actual Yubico OTP functionality, any additional configuration you have loaded into the second slot, PIV certificates, etc. I opted to re-key my PIV certificate and replace it.

    I did have a lot of trouble with the Yubikey Personalization Tool. On one Mac it works fine; on another it does not. On Windows it always seems to work. This wasn’t unique to the Yubikey 4C, either.

    USB-C

    If you are in a pure USB-C environment, or mostly so, then this is a great upgrade: no little adapters to lose. If, however, you have a mix of USB-C and USB-A, you might want to stick with USB-A for a while. There are plenty of adapters that let you go from USB-A to USB-C, but the reverse doesn’t exist, and that’s intentional. Since USB-C can do power delivery, plugging a USB-C device into a USB-A port might damage the USB-A port, so the USB-IF does not allow such adapters to be certified.

    [Image: Yubikey 4 with a USB-C adapter]

  • The Pains of Deploying HTTPS Certificates

    There’s been some discussion recently about how long an x509 certificate should be valid if it was issued by a Certificate Authority that is a member of the CA/B Forum.

    Currently, the limit is 39 months, or three and a quarter years. This means that operationally, a certificate from a CA must be changed at least every 39 months. The discussion proposed shortening that length to 13 months.

    Why Shorten It?

    While Let’s Encrypt is lauded for being free, the most impressive aspect of it is that it can be fully automated, and easily so. Let’s Encrypt recommends Certbot, a piece of software you install on your server that sets up HTTPS for various web servers and handles renewals, domain validation, etc. Since this is fully automated, the validity period of a certificate is inconsequential: the certificate could be valid for a single day, as long as it keeps getting renewed and replaced correctly.
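
    As a rough sketch of what that automation can look like (this assumes the common Certbot-with-nginx setup; the domain is a placeholder):

    # Obtain and install a certificate for an nginx site (one-time setup).
    certbot --nginx -d example.com
    # Renewal only replaces certificates that are close to expiry, so it is
    # safe to run on a schedule, e.g. a daily cron entry:
    # 0 3 * * * certbot renew --quiet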

    This has a lot of positives. A short certificate lifespan means revocation is less of a concern. Revocation in PKI largely doesn’t work for HTTPS, simply because in most* cases online revocation checking isn’t performed. We have tools coming, like Must Staple, that will help fix that, but those are still a ways off from being widely deployed and adopted. If a certificate is only valid for three months and is mis-issued, this limits the period of time that the mis-issued certificate could be used.

    Along with Must Staple and CT, this also helps address the issue of domain squatters buying a domain, getting a long-length certificate for it, and then selling the domain all the while having a valid certificate.

    There are also plenty of good reasons, aside from these, to shorten a certificate’s lifetime.

    Why Not Shorten It?

    Shorter certificate lifetimes have several benefits, so what are the reasons not to allow such a thing? We have a proven system to demonstrate that it’s automatable, and for more complex cases, it should be relatively painless to automate, right?

    That’s where I have to disagree, and why I’m rather hesitant to support this with the current state of certificate deployment.

    I’d like to tell a short story about a certificate I had to manage. It was for an HTTPS endpoint that a 3rd party used to upload data to us. The 3rd party required our endpoint to support HTTPS, and, strangely, while doing this integration they asked us to securely deliver the x509 certificate to them. When asked why, they said they pin to the certificate that we send them, and they required pinning the leaf certificate. This means that whenever we have to change our certificate, we need to coordinate with the 3rd party.

    Unfortunately, this 3rd party wasn’t exactly fast to perform these changes. We needed to coordinate days in advance with them, discuss the operations, and they actually counted the hours of work against our support contract.

    If this sounds ridiculous - I agree. But it was the requirement, and the 3rd party insisted on it; talking with others who integrated with them, they were frustrated by the same requirement. The certificate still needed to be issued by a CA - that is, they would not pin against a self-signed certificate. This party also had a monopoly on the data we wanted, so we didn’t have much choice there, either.

    This is one example of many that I can recount of an environment where renewing a certificate is not easy - or even possible - to automate. Other situations involved an overly complex CCRB (change control review board) where changing the certificate required a lot of operational testing, sign-off, approvals, etc. Process can be fixed, but it’s more stubborn than some might realize. Other challenges are technological, like when an HSM is involved. Yes, it’s automatable - but it will take a lot of time for an organization to get there, and HSMs are unforgiving of mistakes.

    It’s also worth pointing out that I think a lot of people lose sight of the fact that certificates are often used outside of HTTPS. TLS is a general-purpose transport tunnel; you can encrypt all sorts of traffic with it, such as Remote Desktop, SQL Server, VPN, CAPWAP, etc. Some of these circumstances do require or use a certificate from a CA. While a web server might be easy to automate, other things are not.

    For those environments, going from 39 months to 13 would also roughly triple the certificate replacement work.

    Quick Thoughts

    I’m not happy with the status quo, either. Certificates should be automatable and they should have a shorter lifespan, but we’re not quite there yet. I would argue that it would take some organizations months, or even years, of work to automate their entire certificate infrastructure. Yes, I think getting there would be a big benefit for those organizations anyway.

    Going from 39 months to 13 months is overambitious at this point. I would test the waters with a change to 25 months to see how CAs’ customers are able to cope with the change. That would also put the writing on the wall that they need to start automating before a 13 month limit is imposed.

    It’s hard to balance good security with what works in the real world, and I just don’t think the real world is ready for this change yet. Organizations are already scrambling to keep up with other changes; the TLS 1.2 requirement for PCI vendors already has them working hard.

    I do hope we get there one day though.

    * “Most” is used generally here - revocation checking behavior differs from environment to environment and the type of certificate, such as EV certificates.

  • Authenticode Sealing

    A while ago I wrote about Authenticode stuffing tricks. In summary, it allows someone to change small parts of a binary even after it has been signed. These changes wouldn’t allow changing how the program behaves, but do allow injecting tracking beacons into the file, even after it has been signed. I’d suggest reading that first if you aren’t familiar with it.

    This has been a criticism of mine about Authenticode, and recently I stumbled on a new feature in Authenticode, called sealing, that supposedly fixes two of the three ways that Authenticode allows post-signature changes.

    It looks like Authenticode sealing aims to make these stuffing tricks a lot harder. Before we dive in, I want to disclaim that sealing has literally zero documentation from Microsoft. Everything forward from here has been me “figuring it out”. I hope I’m right, but welcome corrections. I may be entirely wrong, so please keep that in mind.

    Recall that two of the ways of injecting data into an Authenticode-signed binary are in the signature itself, because not all parts of the signature are actually signed. This includes the certificate table as well as the unauthenticated attributes section of the signature. Sealing prevents those sections from changing once the seal has been made.

    It starts with an “intent to seal” attribute. Intent to seal is done when applying the primary signature to a binary. We can apply an intent to seal attribute using the /itos option with signtool. For example:

    signtool sign ^
        /sha1 2d0366fa88640481456079fd864f3f02c8103867 ^
        /fd sha256 /tr http://timestamp.digicert.com ^
        /td SHA256 /itos authlint.exe
    

    At this point the file has a primary signature and a timestamp, but the signature is not valid. It has been marked as “intent to seal” but no seal has been applied. Windows treats it as a bad signature if I try to run it.

    [Image: Windows refusing to run the binary with only an intent to seal]

    Intent to seal is an authenticated attribute. That is, the signature at this point includes the intention in its own signature. I could not remove the intent to seal attribute without invalidating the whole signature.

    Now at this point I could add a nested signature, if I want, since the seal hasn’t been finalized. I’ll skip that, but it’s something you could do if you are using dual signatures.

    The next step is to seal it:

    signtool sign ^
        /sha1 2d0366fa88640481456079fd864f3f02c8103867 ^
        /seal /tseal http://timestamp.digicert.com ^
        /td SHA256 authlint.exe
    

    This finishes off the seal and timestamps the seal. Note that I am using the same certificate as the one that was used in the primary signature. If I use a different certificate, the entire signature is removed and the file is re-signed with that certificate. Thus, you cannot seal a signature using a different certificate without changing the primary signature in the first place.

    Now we have a sealed signature. What happens if I try appending a signature using the /as option? I get an error:

    The file has a sealed signature. In order to append more signatures the seal will have to be removed and the file will have to be re-signed. The /force option must be specified as part of the command in order to do so.

    This is interesting because appended signatures are unauthenticated attributes, yet it breaks the seal. This means seals are signatures that account for unauthenticated attributes.

    What this all culminates in is that a seal is a signature over the entire signature graph, including the parts that were being used to cheat Authenticode in the first place.

    Sealing appears to be an unauthenticated attribute itself which contains a signature, and the same goes for its timestamp. It would seem that sealing is, in a strange way, Authenticode for Authenticode. The difference is that a sealing signature has no concept of unauthenticated attributes, and it uses the certificates from the primary signature. That leaves no room for data to be inserted into the signature once it has been sealed.

    To verify this, I first signed a binary without a seal, then changed an unauthenticated attribute, and noted that signtool verify /pa /all authlint.exe was still OK with the signature. With a seal, signtool verify /pa /all authlint-sealed.exe now failed when I changed the same unauthenticated attribute.
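
    For reference, those two checks side by side (the same test binaries as above); the first still reported OK after the change, the second did not:

    signtool verify /pa /all authlint.exe
    signtool verify /pa /all authlint-sealed.exe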

    This has some interesting uses. As a signer, it gives me more power to ensure my own signed binaries do not get tinkered with, have signatures appended, or have tracking beacons inserted. If someone were to do so, they would invalidate the sealing signature. They cannot remove the seal, because the primary signature has the intent to seal attribute, which cannot be removed either. They also can’t re-seal it with a different certificate without completely re-signing the primary signature.

    As a consumer of signed executables, this doesn’t make a huge impact on me, yet. It would be interesting and exciting to see Windows’s security UX take sealing into consideration. The UAC and Mark-of-the-Web dialogs could conceivably give a more secure indicator if the file is sealed. This would mean that for authors to insert tracking data into their binaries, they would have to completely re-sign the executable, which is expensive and is why they don’t do it in the first place.

    As a reminder, these are my observations of sealing. There is no documentation about sealing that I am aware of, but based on the behavior that I observed, it has some very powerful properties. I hope that it becomes better documented and encouraged, and eventually more strictly enforced.

    As for using sealing, I would hold off for now. The lack of documentation suggests that it may not be fully ready for use yet, but it will be interesting to see where this goes.

  • .NET Core CI with Surf

    I started taking a look at Paul Betts’s Surf project to do builds for things I work on. Currently I have things building in various other places, like Travis CI, Circle CI, etc. All of these options have one thing in common: they run your build in a Linux container that gets started on every build.

    This worked fine for me; in fact, I was really impressed with both services. But part of the build process was getting the environment into the right state: installing packages with apt-get, pulling down some sources, building and installing them with make, etc. It got to the point where 90% of the build time was going to preparing the environment for the build. Eventually we needed to be able to build our own container with all of the prerequisites already on it, and we needed something to do the actual building.

    Enter Surf. Surf gives us exactly what Travis CI gave us: it checks out your repository, runs a build, and updates the GitHub PR status. That’s it. It’s hugely appealing because it’s stateless, built on Node.js, and doesn’t even have a GUI. Contrast this with something like TeamCity or Jenkins, where you need to set up a database, spend time configuring remotes and builds, find the right plugins to update GitHub PR statuses, etc. Since Surf is stateless and very simple, it also made sense to run it in a container.

    Installing Surf is simple enough: it’s just npm install -g surf-build. There isn’t anything more to it.

    There are two commands that surf gives that are of interest at this point: surf-build and surf-run.

    Surf-Build

    surf-build is the command that actually checks out your git repository and runs a build. Surf will try its best to figure out how to build your project for you, but the option that works best for me is to just have a file called build.sh (or build.ps1 on Windows) in the root of your repository. Whatever you put in your build script is how your project gets built. It could run MSBuild, Cake, Make, etc. If the exit code is zero, your build passed.
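
    As a sketch, a minimal build.sh for a .NET Core project might look like this (the test project path is a placeholder; point it at whatever your repository actually contains):

    #!/usr/bin/env bash
    # Stop at the first error so any non-zero exit code fails the build.
    set -e
    dotnet restore
    # Placeholder test project path.
    dotnet test test/MyProject.Tests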

    surf-build by itself simply runs a build of the git hash you give it. It works like this:

    surf-build \
        -s 56920f57db4afba1262b6969f577aaedd5e48b36 \
        -r https://github.com/vcsjones/AuthenticodeLint.Core
    

    As always, I experiment with new ideas on my own projects first. This will run my build of that git hash from the given GitHub repository. That’s all it takes.

    Surf in a Docker image is especially useful because I can have my whole build environment wherever I am. If I have Surf in a Docker container, all I need to do is pull down my Docker image (or build it locally) and do this:

    docker run -e 'GITHUB_TOKEN=<github token>' \
        -t 720adcff1217 \
        surf-build \
        -s 56920f57db4afba1262b6969f577aaedd5e48b36 \
        -r https://github.com/vcsjones/AuthenticodeLint.Core
    

    A few things to note: surf-build expects an environment variable called GITHUB_TOKEN in order to update the pull-request status. It will also use this token to publish a secret gist of the build’s log. If you omit GITHUB_TOKEN, Surf will still run the build, but only if the repository is public, and it won’t set a pull-request status.

    Surf-Run

    surf-build is fine and all, but it’s entirely manual. We don’t want to run surf-build ourselves; we want Surf to watch our repository and run surf-build for us. Enter surf-run. This command does exactly what I want: it runs surf-build, or any command really, whenever there is a new pull request or a commit is added to an existing pull request.

    It works like this:

    surf-run \
        -r https://github.com/vcsjones/AuthenticodeLint.Core \
        -- surf-build -n 'surf-netcore-1.0.1'
    

    surf-run watches the repository we specify and starts whatever process you want, as specified after the --. It also sets two environment variables, SURF_SHA1 and SURF_REPO. This is how surf-build knows what git hash and repository to build, instead of being passed them with the -s and -r switches.
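
    Since they are ordinary environment variables, anything you put after the -- can read them. A trivial stand-in command (purely illustrative) shows what surf-build would see:

    surf-run \
        -r https://github.com/vcsjones/AuthenticodeLint.Core \
        -- sh -c 'echo "would build $SURF_SHA1 from $SURF_REPO"'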

    Running in Docker

    My Docker image needs a few things. It needs Node.js to run Surf, and it needs .NET Core, to start. I needed to pick a base image, so I went with node:boron, which is the 6.x LTS for Node. I chose this instead of one of the .NET Core images because I found that installing .NET Core from scratch on an image was actually easier than installing Node.js. Now I need to put together a Dockerfile with everything I need. To start, I need all of the dependencies:

    FROM node:boron
    RUN apt-get update && apt-get install -y --no-install-recommends \
        curl \
        fakeroot \
        libunwind8 \
        gettext \
        build-essential \
        ca-certificates \
        git
    

    Some of these are dependencies I need for my own projects; others are needed by Surf or .NET Core, like libunwind8. These are the commands to install .NET Core 1.0.1 on Debian Jessie, taken verbatim from the Microsoft install instructions:

    RUN curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?LinkID=827530 \
        && mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet \
        && ln -s /opt/dotnet/dotnet /usr/local/bin
    

    The next step is a bit of a workaround. I wanted my images to be as ready-to-go as possible before actually running them. The dotnet command does some “first run” activities, like pulling down a bunch of NuGet packages for the .NET Core runtime. To get this done while building the Docker image, I simply create a new .NET Core project with dotnet new in the temp directory, then remove it.

    RUN mkdir -p /var/tmp/dotnet-prime \
        && cd /var/tmp/dotnet-prime && dotnet new && cd ~ \
        && rm -rf /var/tmp/dotnet-prime
    

    There is an open issue on GitHub to facilitate this first-run behavior without side effects, like creating a new project or needing a dummy project.json to restore.

    Next, we install Surf:

    RUN npm install -g surf-build@1.0.0-beta.15
    

    I pinned Surf to beta.15 for now, but that might not be something you want to do.

    Finally, we specify our command:

    CMD surf-run \
        -r https://github.com/vcsjones/AuthenticodeLint.Core \
        -- surf-build -n 'surf-netcore-1.0.1'
    

    Now we have a Dockerfile for .NET Core with surf on it. With my Docker image running, I tested a pull request:

    [Image: GitHub pull request status set by Surf]

    Success! This is exactly what I wanted. Surf publishes the build log as a gist, a simple way to view logs.

    [Image: Surf build log published as a gist]

    The actual build script in build.sh is a simple dotnet restore and then dotnet test in the test directory. As for the container itself, I have it running in AWS ECS, which works well enough.

    All in all, I’m super happy with Surf. It does nothing more than I need it to, and I don’t have anything complex set up. If the container instance starts misbehaving, I can terminate it and let another take its place. Having everything in a container also means my whole build environment is portable.

  • Making Sense of the .NET CLI

    I’ve been using the .NET Core CLI for a while now, and I lurk on the GitHub issues. I’ve seen that some aspects of it are a little difficult to understand, especially if you want to contribute to it.

    What .NET Version Am I Using?

    There have been a number of issues filed by people trying to interpret the output of dotnet --version, which today looks something like “1.0.0-preview2-1-003177”. Quite often, the user has just installed .NET Core 1.1, then run --version to see that the update took, only to notice that it still says something like “1.0.0-preview2-1-003177”. What gives?

    The first thing to point out is that the Tooling and the Runtime have two different versions. The Tooling has not yet been released in 1.0.0 form. The Runtime, however, is at 1.1.0 as of this writing.

    In short, dotnet --version is the version of the tooling. If you want the version of the Core Host, then running dotnet by itself is the correct option. It will print something like this:

    Microsoft .NET Core Shared Framework Host

    Version : 1.1.0

    Finally, there is the --info option. This prints some additional information about the runtime environment it thinks you are running on, such as the RID and OS info.

    There is an issue on GitHub to make --info better. I would encourage feedback on that issue if all of this seems confusing to you.
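
    Putting those three together (the version numbers shown are just the ones from this post; yours will differ):

    dotnet --version   # tooling version, e.g. 1.0.0-preview2-1-003177
    dotnet             # no arguments: prints the Shared Framework Host version (1.1.0 here)
    dotnet --info      # tooling version plus the RID and OS information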

    SDK versions and the Muxer

    The .NET CLI allows installing multiple versions side by side. On macOS, you can list them in the directory /usr/local/share/dotnet/sdk/. Which version is used depends on your project’s global.json.
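
    On macOS, a quick way to see what is installed is simply to list that directory (the versions shown are just the ones mentioned in this post):

    ls /usr/local/share/dotnet/sdk/
    # 1.0.0-preview2-1-003177
    # 1.0.0-preview4-004130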

    global.json allows specifying an SDK version. If global.json doesn’t declare which SDK version to use, the highest non-preview version will be used. Today, there aren’t any versions that aren’t previews, so it’s whatever the highest version you have installed is.

    If you do specify a version, like this:

    {
        "sdk": {
            "version": "1.0.0-preview2-1-003177"
        }
    }
    

    Then that version of the SDK will be used, even if I have 1.0.0-preview4-004130 installed.

    This process is handled by the muxer. The muxer’s responsibility is to bootstrap the SDK and tooling version. The first thing the muxer does is walk down the directory structure looking for a global.json and an “sdk” to use. If it finds one and the version is valid, the muxer loads that SDK’s path and tooling.

    It’s worth pointing out that global.json affects everything. If you’re in a directory that has a global.json, then everything respects that version of the SDK. If I run dotnet --info or dotnet in a directory whose global.json specifies an SDK, it will behave exactly as that version of the SDK.

    This makes it easy to have projects use different SDKs by specifying the global.json at the project root. This means I can have the preview4 nightly toolings installed, all of which use csproj for projects, but also continue to build project.json style projects.

    The last thing to remember is that the muxer looks for global.json down the directory structure. So if a parent directory, or parent’s parent directory has one of these files, it will be respected. The “nearest” global.json is honored.
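
    As a quick, hypothetical illustration: if the repository root has a global.json pinning preview2, any dotnet command run from a subdirectory picks it up.

    cd my-repo/src/MyApp    # my-repo/global.json pins 1.0.0-preview2-1-003177
    dotnet --version        # reports 1.0.0-preview2-1-003177, not the newest SDK installed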

    An icky quirk of the muxer is that it fails silently. If you ask for an SDK version that doesn’t exist, it will just behave as if you hadn’t specified one in the first place.

    Running Applications

    You’ve probably noticed that when you compile and publish an application, it does not include a native executable (ready-to-run). It produces a DLL.

    If you want to run a project, use dotnet run.

    If you want to run a compiled DLL, use dotnet myapp.dll.

    Doing dotnet run myapp.dll looks right, and it might work, but it might not do what you expect. It runs a project and passes myapp.dll as an argument to Main. If you happen to have a project.json in your working directory, then that is what it is running.
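
    To make the distinction concrete (myapp.dll is a placeholder name):

    dotnet run              # builds and runs the project in the current directory
    dotnet myapp.dll        # runs an already-compiled application DLL
    dotnet run myapp.dll    # runs the *project* in the current directory and passes
                            # "myapp.dll" as an argument to its Main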