I recently started a little tool called OpenVsixSignTool, which is an open source implementation of signing a VSIX package. If that sounds boring to you, I agree! It has a niche audience, considering that Microsoft already makes a tool for signing VSIX packages, which are extensions for Visual Studio. Why an OSS version of it?
The idea came from Oren Novotny, so kudos to him for wanting to make signing VSIX packages better. Oren encountered some limitations of the existing sign tool, and implementing a new one from scratch wasn’t an entirely crazy idea.
The limitation came down to where the existing VsixSignTool was willing to look for a certificate and private key to sign with. The Microsoft VsixSignTool requires either a PFX file containing the public and private key, or a P7B file with the certificate, along with the private key in the certificate store.
Ideally, it could do a few new things. The first is to have the same behavior as the Authenticode signtool, where it takes a simple SHA1 thumbprint of the certificate to sign with and finds it in the certificate store. No more P7B file. The second is an entirely new idea, which is to use Azure Key Vault. Azure Key Vault supports certificates and keeping the private key in an HSM, which OpenVsixSignTool does support.
It’s still being built, but the rough functionality is there. If signing VSIX packages is something you want to do, give it a try and let me know how it can be better.
The only official way to Authenticode sign a file on Windows is using the very flexible “signtool” as part of the Windows SDK. Signtool is capable of signing a variety of things, such as portable executables, MSIs, etc with a variety of different digest algorithms, timestamps, and the like.
One area where signtool has not been flexible is where it looks for the private key to perform the signature. If the private key was not associated with a certificate in the certificate store, signtool was unable to use it.
This meant that the private key needed support from CAPI or CNG. If the private key was not reachable through a CSP or a CNG Key Storage Provider, then signtool would not be able to use the key. For the most part, this was OK. Most SmartCard and HSM vendors provide a CSP and/or CNG provider, so signtool worked fine.
Sometimes though, a CNG or CSP provider is not available. A practical case for this is Azure Key Vault. In this situation, using signtool was not possible, until recently.
Starting in the Windows 10 SDK, two new command line switches are available: /dg and /di. Recall that an Authenticode signature is always performed over a hash. The /dg option changes signtool's behavior to output a digest that you can sign using anything you'd like. Let's try this on a copy of notepad.exe.
signtool sign /dg "C:\scratch\dir" /fd SHA256 /f public-cert.cer notepad.exe
This takes a file containing a public certificate - there is no private key in public-cert.cer. You could also use the /sha1 option to specify a certificate in the certificate store that likewise has only a public key. This will output a few files in the "C:\scratch\dir" directory. The digest is the one with the ".dig" extension. This file contains the Base64-encoded digest to sign. Next, using your custom tool, sign the digest with the private key for the certificate. You should decode the Base64 digest before signing if the signing API expects a raw binary digest.
Next, encode your signature in Base64 and place it in a file in the "C:\scratch\dir" directory with the same name as the digest file, plus a ".signed" extension. For example, "notepad.exe.dig.signed".
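The decode/sign/encode dance above can be sketched in a few lines of Python. This is a hypothetical helper - the names sign_digest_file and sign_raw are mine, not part of signtool - and sign_raw stands in for whatever backend actually holds the private key (an HSM client, Azure Key Vault, etc.).

```python
import base64
from pathlib import Path

def sign_digest_file(dig_path, sign_raw):
    """Sign a digest produced by `signtool sign /dg` and write the
    ".signed" companion file that `signtool sign /di` looks for."""
    dig = Path(dig_path)
    # The .dig file contains the Base64-encoded digest to sign.
    raw_digest = base64.b64decode(dig.read_text().strip())
    # sign_raw performs the actual signature over the raw digest bytes.
    signature = sign_raw(raw_digest)
    # signtool expects e.g. "notepad.exe.dig.signed" next to the digest.
    signed_path = dig.with_name(dig.name + ".signed")
    signed_path.write_text(base64.b64encode(signature).decode("ascii"))
    return signed_path
```

The only signtool-specific parts are the Base64 encoding and the file naming; everything in between is up to you.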
The next step is to ingest the signed digest along with the rest of the Authenticode signature to complete the signing.
signtool sign /di "C:\scratch\dir" notepad.exe
This will complete the signing process, and we now have our own signed copy of notepad.exe. Appending a signature is done just as before, except with the /as option.
This provides great flexibility for signers to use non-CSP/CNG signing options, or to offload the signing process. Signtool can now also sign just a plain digest file using the /ds option. If you have a dedicated server for performing Authenticode signing, you can now use the /dg and /di options so that only a very small file needs to be moved to the signing server, instead of the entire binary if it is large.
In case you weren't aware, Yubico launched a USB-C version of their popular Yubikey device. On launch day, the 13th, I paid for three of them and eagerly awaited their arrival.
I’ve recently just finished up a laptop refresh at the house, which means “MacBook <something>”. For light use and travel, I have a MacBook with its singular USB-C port. For heavier things, I have a MacBook Pro with Thunderbolt 3, which gives me 4 USB-C ports. I have no laptops with USB-A connections anymore.
If you have all or mostly USB-C in your life, then the 4C is a great companion and works just as well as its predecessor.
The 4C can go on a key ring, just like the 4 could. Their sizes are noticeably different though. The 4C is smaller in width and height, at the expense of it being thicker.
I find the thickness just slightly troublesome when it's attached to a key ring. The previous one left just enough space for the key ring to jut out from. With the additional thickness, I now have to prop my laptop up, put it on a stand, or find a new solution for the key ring. However, the smaller size is a welcome change since it's permanently affixed to my key chain.
Functionally, it's identical to the original 4. It's worth noting, however, that you can't clone one Yubikey to another, so you may have to use both for a while during a transition phase. This includes the actual Yubico OTP functionality, any additional configuration you may have loaded into the second slot, PIV certificates, etc. I opted to re-key my PIV certificate and replace it.
I did have a lot of trouble with the Yubikey Personalization Tool. On one Mac it works fine, on another it does not. On Windows it always seems to work. This wasn’t unique to the Yubikey 4C, either.
If you are in a pure USB-C environment, or mostly so, then this is a great upgrade. No little adapters to lose. If, however, you have a mix of USB-C and USB-A, you might want to stick with USB-A for a while. There are plenty of adapters that allow you to go from USB-A to USB-C, but the reverse doesn't exist, and that's intentional. Since USB-C can do power delivery, plugging a USB-C device into a USB-A port might damage the USB-A port, so the USB-IF does not allow such things to get certified.
There’s been some discussion recently about how long an x509 certificate should be valid for if they were issued by a Certificate Authority that is a member of the CA/B Forum.
Currently, the limit is 39 months, or three and a quarter years. This means that operationally, a certificate from a CA must be changed at least every 39 months. The discussion proposed shortening that length to 13 months.
Why Shorten It?
While Let's Encrypt is lauded for being free, the most impressive aspect of it is that it can be - and easily is - fully automated. Let's Encrypt works with Certbot, a piece of software you install on your server that sets up HTTPS for various web servers and handles renewals, domain validation, etc. Since this is fully automated, the validity period of a certificate is inconsequential - the certificate could be valid for a single day as long as it keeps getting renewed and replaced correctly.
This has a lot of positives. A short lifespan for a certificate means revocation is less of a concern. Revocation in PKI largely doesn't work for HTTPS simply because, in most* cases, online revocation checking isn't performed. We have tools coming soon that will help fix that, like Must Staple, but those are still a ways off from being widely deployed and adopted. If a certificate is only valid for three months and is mis-issued, this limits the period of time that the mis-issued certificate could be used.
Along with Must Staple and CT, this also helps address the issue of domain squatters buying a domain, getting a long-length certificate for it, and then selling the domain all the while having a valid certificate.
There are also plenty of good reasons aside from these to shorten a certificate's lifetime.
Why Not Shorten It?
Shorter certificate lifetimes have several benefits, so what are the reasons not to allow such a thing? We have a proven system to demonstrate that it’s automatable, and for more complex cases, it should be relatively painless to automate, right?
That’s where I have to disagree, and why I’m rather hesitant to support this with the current state of certificate deployment.
I'd like to tell a short story about a certificate I had to manage. It was for an HTTPS endpoint that a 3rd party used to upload data to us. The 3rd party required our endpoint to support HTTPS and, strangely, while doing this integration they asked us to securely deliver the x509 certificate to them. When asked why, they said they pin to the certificate that we send them. They required pinning the leaf certificate. This means when we have to change our certificate, we need to coordinate with the 3rd party.
Unfortunately, this 3rd party wasn’t exactly fast to perform these changes. We needed to coordinate days in advance with them, discuss the operations, and they actually counted the hours of work against our support contract.
If this sounds ridiculous - I agree. But, it was the requirement. The 3rd party insisted on doing it - and talking with others they were frustrated by the same requirements. The certificate still needed to be issued by a CA - that is they would not pin against a self-signed certificate, etc. Also, this party had a monopoly on the data we wanted, so we didn’t have much choice there, either.
This is one example of many that I can recount from environments where renewing a certificate is not easy - or even possible - to automate. Other situations involved an overly-complex CCRB where changing the certificate required a lot of operational testing, sign off, approvals, etc. Process can be fixed, but it's more stubborn than some might realize. Other challenges are technological, like when an HSM is involved. Yes, it's automatable - but it will take a lot of time for an organization to get there, and HSMs are unforgiving of mistakes.
It’s also worth pointing out that I think a lot of people lose sight of the fact that certificates are used (often!) outside of HTTPS. TLS is a general purpose transport tunnel. You can encrypt all sorts of traffic with it - such as Remote Desktop, SQL Server, VPN, CAPWAP, etc. Some of these circumstances do require or use a certificate from a CA. While a web server might be easy to automate, other things are not.
This would lead to a tripling of certificate replacement work.
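Back-of-the-envelope, the arithmetic over a decade looks like this (a simple sketch of renewal cadence, not a precise operational model):

```python
from math import ceil

DECADE_MONTHS = 10 * 12

# Renewals required per decade under each maximum validity period.
renewals_at_39 = ceil(DECADE_MONTHS / 39)  # 39-month certificates
renewals_at_13 = ceil(DECADE_MONTHS / 13)  # 13-month certificates

print(renewals_at_39)  # 4
print(renewals_at_13)  # 10
print(39 / 13)         # 3.0 -- three times the renewal rate
```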
I’m not happy with the status quo, either. Certificates should be automatable, they should have a shorter lifespan - but we’re not quite there yet. I would argue that it would take some organizations months, or years of work to support automating their entire infrastructure. Yes, I think it would be a big benefit for organizations to have that anyway.
Going from 39 months to 13 months is overambitious at this point. I would test the waters with a change to 25 months to see how CAs' customers are able to cope with the change. That would also put the writing on the wall that they need to start automating before a 13-month limit is imposed.
It's hard to balance good security with what works in the real world. I just don't think the real world is ready at this point for this change. Organizations are already scrambling to keep up with other changes. The TLS 1.2 requirement for PCI vendors already has them working hard.
I do hope we get there one day though.
* “Most” is used generally here - revocation checking behavior differs from environment to environment and the type of certificate, such as EV certificates.
A while ago I wrote about Authenticode stuffing tricks. In summary, it allows someone to change small parts of a binary even after it has been signed. These changes wouldn’t allow changing how the program behaves, but do allow injecting tracking beacons into the file, even after it has been signed. I’d suggest reading that first if you aren’t familiar with it.
This has been a criticism of mine about Authenticode, and recently I stumbled on a new feature in Authenticode, called sealing, that supposedly fixes two of the three ways that Authenticode allows post-signature changes.
It looks like Authenticode sealing aims to make these stuffing tricks a lot harder. Before we dive in, I want to disclaim that sealing has literally zero documentation from Microsoft. Everything from here on has been me "figuring it out". I hope I'm right, but I welcome corrections. I may be entirely wrong, so please keep that in mind.
Recall that two of the ways to inject data into an Authenticode signature live in the signature itself, because not all parts of the signature are actually signed. This includes the certificate table as well as the unauthenticated attributes section of the signature. Sealing prevents those sections from changing once the seal has been made.
It starts with an "intent to seal" attribute. Intent to seal is done when applying the primary signature to a binary. We can apply an intent to seal attribute using signtool. For example:
signtool sign /sha1 2d0366fa88640481456079fd864f3f02c8103867 /fd sha256 /tr http://timestamp.digicert.com /td SHA256 /itos authlint.exe
At this point the file has a primary signature and a timestamp, but the signature is not valid. It has been marked as “intent to seal” but no seal has been applied. Windows treats it as a bad signature if I try to run it.
Intent to seal is an authenticated attribute. That is, the signature at this point includes the intention in its own signature. I could not remove the intent to seal attribute without invalidating the whole signature.
Now at this point I could add a nested signature, if I want, since the seal hasn’t been finalized. I’ll skip that, but it’s something you could do if you are using dual signatures.
The next step is to seal it:
signtool sign /sha1 2d0366fa88640481456079fd864f3f02c8103867 /seal /tseal http://timestamp.digicert.com /td SHA256 authlint.exe
This finishes off the seal and timestamps the seal. Note that I am using the same certificate as the one that was used in the primary signature. If I use a different certificate, the entire signature is removed and the file is re-signed with that certificate. Thus, you cannot seal a signature using a different certificate without changing the primary signature in the first place.
Now we have a sealed signature. What happens if I try appending a signature using the /as option? I get an error:
The file has a sealed signature. In order to append more signatures the seal will have to be removed and the file will have to be re-signed. The /force option must be specified as part of the command in order to do so.
This is interesting because appended signatures are unauthenticated attributes, yet it breaks the seal. This means seals are signatures that account for unauthenticated attributes.
What this all culminates to is that a seal is a signature of the entire signature graph, including the things that were being used to cheat Authenticode in the first place.
Sealing appears to be an unauthenticated attribute itself which contains a signature, and the same goes for its timestamp. It would seem that sealing is, in a strange way, Authenticode for Authenticode. The difference is that a sealing signature has no concept of unauthenticated attributes, and it uses the certificates from the primary signature. That leaves no room for data to be inserted into the signature once it has been sealed.
To verify this, I first signed a binary without a seal, then changed an unauthenticated attribute, and noted that signtool verify /pa /all authlint.exe was still OK with the signature. With a seal, signtool verify /pa /all authlint-sealed.exe failed when I changed the same unauthenticated attribute.
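My mental model of this behavior can be sketched with plain hashes. This is only an illustration of the relationship - not the real PKCS#7 or sealing structure - where the primary signature covers only the authenticated attributes, while the seal covers the whole signature graph.

```python
import hashlib

# Toy stand-ins for the parts of an Authenticode signature.
authenticated = b"content-hash|signing-time|intent-to-seal"
unauthenticated = b"timestamp|nested-signatures"

def primary_digest(auth):
    # The primary signature only covers the authenticated attributes.
    return hashlib.sha256(auth).hexdigest()

def seal_digest(auth, unauth):
    # The seal covers the entire signature graph, including the
    # unauthenticated attributes that stuffing tricks abuse.
    return hashlib.sha256(auth + unauth).hexdigest()

original_seal = seal_digest(authenticated, unauthenticated)

# Stuff a tracking beacon into the unauthenticated attributes.
tampered = unauthenticated + b"|beacon"

# The primary signature never saw the unauthenticated attributes,
# so it still verifies -- but the seal no longer matches.
assert seal_digest(authenticated, tampered) != original_seal
```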
This has some interesting uses. As a signer, it gives me more power to ensure my own signed binaries are not tinkered with, do not get signatures appended, and do not have tracking beacons inserted. If someone were to do so, they would invalidate the sealing signature. They cannot remove the seal because the primary signature has the Intent to Seal attribute, which cannot be removed either. They can't re-seal it with a different certificate without completely re-signing the primary signature, too.
As a consumer of signed executables, this doesn't make a huge impact on me, yet. It would be interesting and exciting to see Windows' security UX take sealing into consideration. The UAC and Mark-of-the-Web dialogs could conceivably give a more secure indicator if the file is sealed. This would mean that for authors to insert tracking data into their binaries, they would have to completely re-sign the executable, which is expensive and is why they don't do it in the first place.
As a reminder, these are my observations of sealing. There is no documentation about sealing that I am aware of, but based on the behavior that I observed, it has some very powerful properties. I hope that it becomes better documented and encouraged, and eventually more strictly enforced.
As for using sealing, I would hold off for now. Its lack of documentation suggests that it may not be fully ready for use yet, but it will be interesting to see where this goes.