• Generating .lib with Rust build scripts

Something I’ve been working on in my spare time is porting Azure SignTool to Rust. I’ve yet to make up my mind on whether Rust is the one true way forward with that, but that’s a thought for another day.

I wanted to check out the feasibility of it. I’m happy to say that I think all of the necessary concepts are there; they just need to be glued together.

    One roadblock with Azure SignTool is that it needs to use an API, SignerSignEx3, which isn’t included in the Windows SDK. In fact, just about nothing in mssign32 is in the Windows SDK. Not being in the Windows SDK means no headers, and no .lib to link against.

For .NET developers, the lack of a .lib has never really mattered when consuming Win32 APIs. The CLR only needs the ordinal or name of the export and takes care of the rest with platform invoke. For languages like C that use a linker, you need a .lib to link against. Rust is no different.

In most cases, the winapi crate has all of the Win32 functions you need. It’s only APIs that are not in the Windows SDK (or, like SignerSignEx3, entirely undocumented) that won’t be in the crate.

We need to call SignerSignEx3 without anything to link against. We have a few different options:

1. Use LoadLibrary(Ex) and GetProcAddress (sketched below).
    2. Make our own .lib.
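
To give a sense of what the first option involves, here is a minimal sketch of it in plain Win32 C; the real SignerSignEx3 parameter list is omitted since the API is undocumented, and a Rust version would go through the same two calls with a bit more FFI ceremony.

#include <windows.h>

// Option 1: resolve SignerSignEx3 at runtime instead of at link time.
// The real parameter list is omitted because the API is undocumented.
typedef HRESULT (WINAPI *SignerSignEx3Fn)();

static SignerSignEx3Fn LoadSignerSignEx3(void)
{
    HMODULE mssign32 = LoadLibraryW(L"mssign32.dll");
    if (mssign32 == NULL)
    {
        return NULL;
    }
    // Every call site now goes through a function pointer instead of a plain
    // import, which is exactly the clutter the second option avoids.
    return (SignerSignEx3Fn)GetProcAddress(mssign32, "SignerSignEx3");
}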

    The latter seemed appealing because then the Rust code can continue to look clean.

#[link(name = "mssign32")]
extern {
    fn SignerSignEx3(/* ... */);
}

    Making a .lib that contains exports only is not too difficult. We can define our own .def file like so:

LIBRARY mssign32

EXPORTS
    SignerSignEx3

    and use lib.exe to convert it to a linkable lib file:

    lib.exe /MACHINE:X64 /DEF:mssign32.def /OUT:mssign32.lib

If we put this file somewhere that the Rust linker can find it, our code will compile and link successfully.

[Image: Dependency Walker with azure_sign_tool_rs]

I wasn’t thrilled about the idea of checking an opaque binary into source control just for the build, so I sought an option to generate it during the Rust build process.

Fortunately, cargo makes that easy with build scripts. A build script is itself a Rust file, named build.rs, in the same directory as your Cargo.toml file. Its usage is simple:

fn main() {
    // Build script
}

Crucially, if you write to stdout using println!, cargo will recognize certain output as commands that modify the build. For example:

    println!("cargo:rustc-link-search={}", "C:\\foo\\bar");

This will add a path for the linker to search. Now we can devise a plan to make this part of the build: in the build script, call out to lib.exe to generate a .lib to link against, put it somewhere, and add that directory to the linker’s search path.

The next trick in our build script is to find lib.exe. Fortunately, the Rust toolchain already solves this: it relies on link.exe from Visual Studio anyway, so it knows how to find SDK tooling (which moves all over the place between Visual Studio versions). The cc crate makes this easy for us.

    let target = env::var("TARGET").unwrap();
    let lib_tool = cc::windows_registry::find_tool(&target, "lib.exe")
                .expect("Could not find \"lib.exe\". Please ensure a supported version of Visual Studio is installed.");

    The TARGET environment variable is set by cargo and contains the architecture the build is for, since Rust can cross-compile. Conveniently, we can use this to support cross-compiled builds of azure_sign_tool_rs so that we can make 32-bit builds on x64 Windows and x64 builds on 32-bit Windows. This allows us to modify the /MACHINE argument for lib.exe.

I wrapped that up into a helper in case I need to add additional libraries.

use std::env;

enum Platform {
    X64,
    X86,
    ARM,
    ARM64
}

impl std::fmt::Display for Platform {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        match *self {
            Platform::X64 => write!(f, "X64"),
            Platform::X86 => write!(f, "X86"),
            Platform::ARM => write!(f, "ARM"),
            Platform::ARM64 => write!(f, "ARM64"),
        }
    }
}

struct LibBuilder {
    pub platform : Platform,
    pub lib_tool : cc::Tool,
    pub out_dir : String
}

impl LibBuilder {
    fn new() -> LibBuilder {
        let target = env::var("TARGET").unwrap();
        let out_dir = env::var("OUT_DIR").unwrap();
        let platform =
            if target.contains("x86_64") { Platform::X64 }
            else if target.contains("ARM64") { Platform::ARM64 }
            else if target.contains("ARM") { Platform::ARM }
            else { Platform::X86 };
        let lib_tool = cc::windows_registry::find_tool(&target, "lib.exe")
            .expect("Could not find \"lib.exe\". Please ensure a supported version of Visual Studio is installed.");
        LibBuilder {
            platform : platform,
            lib_tool : lib_tool,
            out_dir : out_dir
        }
    }

    fn build_lib(&self, name : &str) -> () {
        // Run lib.exe against build\<name>.def and emit <name>.lib into OUT_DIR.
        let mut lib_cmd = self.lib_tool.to_command();
        lib_cmd
            .arg(format!("/MACHINE:{}", self.platform))
            .arg(format!("/DEF:build\\{}.def", name))
            .arg(format!("/OUT:{}\\{}.lib", self.out_dir, name));
        lib_cmd.output().expect("Failed to run lib.exe.");
    }
}

    Then our build script’s main can contain this:

fn main() {
    let builder = LibBuilder::new();
    builder.build_lib("mssign32");
    println!("cargo:rustc-link-search={}", builder.out_dir);
}

    After this, I was able to link against mssign32.

Note that, since this entire project is Windows-specific and has zero chance of running anywhere else, I did not bother to decorate anything with #[cfg(target_os = "windows")]. If you are attempting to make a cross-platform project, you’ll want to account for all of this in the Windows-specific parts.

    With this, I now only need to check in a .def text file and Cargo will take care of the rest.

  • Caddy

    This is my first post with my blog running Caddy. In short, it’s a web server with a focus on making HTTPS simple. It accomplishes this by supporting ACME out of the box. ACME is the protocol that Let’s Encrypt uses. Technically, Caddy supports any Certificate Authority that supports ACME. Practically, few besides Let’s Encrypt do, though I am aware of other CAs making an effort to support issuance with ACME.

Though I’ve seen lots of praise for Caddy and its HTTPS ALL THE THINGS mantra for a while now, I never really dug into it until recently. What actually grabbed me were several of its other features.

Configuration is simple. That isn’t always a good thing; simple usually means advanced configuration or features are lost in the trade-off. Fortunately, that doesn’t seem to be the case with Caddy, at least for me; I’m sure it may be for others. When I evaluated Caddy, there were a number of things nginx was taking care of besides serving static content:

    1. Rewrite to WebP if the user agent accepts WebP.
    2. Serve pre-compressed gzip files if the user agent accepts it.
    3. Serve pre-compressed brotli files if the user agent accepts it.
    4. Take care of some simple redirects.
    5. Flexible TLS configuration around cipher suites, protocols, and key exchanges.

Caddy does all of those, and it does them better. Points two and three Caddy just does: it’ll serve gzip or brotli if the user agent accepts them and a pre-compressed version of the file is on disk.

    Rewriting to WebP was easy:

header /images {
    Vary Accept
}

rewrite /images {
    ext .png .jpeg .jpg
    if {>Accept} has image/webp
    to {path}.webp {path}
}

The configuration does two things. First, it adds the Vary: Accept header to all responses under /images. This is important if a proxy or CDN is caching assets. The second part says: if the Accept header contains “image/webp”, rewrite the request to “{path}.webp”, so Caddy will look for “foo.png.webp” if a browser requests “foo.png”. The second {path} means fall back to the original if there is no WebP version of the file. Nginx, on the other hand, was a bit more complicated.

    HTTPS / TLS configuration is simple and well documented. As the documentation points out, most people don’t need to do anything other than enable it. It has sensible defaults, and will use Let’s Encrypt to get a certificate.

    I’m optimistic about Caddy. I think it’s a very nice web server / reverse proxy. I spent about an hour moving my 400 lines of nginx configuration to 51 lines of Caddy configuration.

    I’d recommend giving it a shot.

  • Azure SignTool

A while ago, Oren Novotny and I started exploring the feasibility of doing Authenticode signing with Azure Key Vault. Azure Key Vault lets you do some pretty interesting things, including treating it as a pseudo network-attached HSM.

A problem with Azure Key Vault, though, is that it’s an HTTP endpoint. Integrating it into existing standards like CNG or PKCS#11 hasn’t been done yet, which makes it difficult to use with tools that expect a CSP or CNG provider, like Authenticode signing.

Our first attempt at getting this working was to see if we could use the existing signtool. A while ago, in my post Custom Authenticode Signing, I wrote about some new options in signtool that let you sign the digest with whatever you want.

This made it possible, if a little unwieldy, to sign things with Authenticode and use Azure Key Vault as the signing source. As I wrote, the main problem was that you needed to run signtool twice and also develop your own application to sign the digest with Azure Key Vault. The steps went something like this:

1. Run signtool with the /dg flag to produce a base64-encoded digest to sign.
2. Produce a signature for that digest using Azure Key Vault with a custom tool.
3. Run signtool again with /di to ingest the signature.

This was, in a word, “slow”. The dream was to produce a signing service that could sign files in bulk. While a millisecond or two may not be the metric we care about, this was costing many seconds. It also left us feeling like the solution was held together by shoestrings and bubblegum.


However, signtool’s documentation mysteriously mentions a flag called /dlib. It says it combines /dg and /di into a single operation. The documentation, in its entirety, is this:

    Specifies the DLL implementing the AuthenticodeDigestSign function to sign the digest with. This option is equivalent to using SignTool separately with the /dg, /ds, and /di switches, except this option invokes all three as one atomic operation.

This lacks a lot of detail, but it seems like exactly what we want. We can surmise that the value of this flag is a path to a library that exports a function called AuthenticodeDigestSign. That is easy enough to do. However, the documentation fails to mention what is passed to this function, or what we should return to it.

    This is not impossible to figure out if we persist with windbg. To make a long story short, the function looks something like this:

HRESULT WINAPI AuthenticodeDigestSign(
    CERT_CONTEXT* certContext,
    void* unused,
    ALG_ID algId,
    BYTE* pDigestToSign,
    DWORD cDigestToSign,
    CRYPTOAPI_BLOB* signature
);

With this, it was indeed possible to make a library that signtool would call to sign the digest. Oren put together a C# library that did exactly that, on GitHub under KeyVaultSignToolWrapper. I even made some decent progress on a Rust implementation.

This was a big improvement. Instead of multiple invocations of signtool, we could do it all at once. This still presented some problems, though. The first was that there was no way to pass any configuration to the library through signtool. The best we could come up with was to wrap the invocation of signtool, set environment variables in the signtool process, and let the library get its configuration from those environment variables, such as which vault to authenticate to and how to authenticate. A final caveat was that this still depended on signtool. Signtool is part of the Windows SDK, which technically doesn’t allow us to redistribute it in pieces, so if we wanted to use signtool we would need the Windows SDK installed.


Later, I noticed that Windows 10 includes a new signing API, SignerSignEx3. I happened upon this when I was poking around in windbg inside AuthenticodeDigestSign and saw that its caller was SignerSignEx3, not signtool. I checked the exports in mssign32 and did see it listed as a new export starting in Windows 10. The natural conclusion was that Windows 10 ships a new API that is capable of using callbacks for signing the digest, and signtool wasn’t doing anything special.

    As you may have guessed, SignerSignEx3 is not documented. It doesn’t exist in Microsoft Docs or in the Windows SDK headers. Fortunately, SignerSignEx2 was documented, so we weren’t starting from scratch. If we figured out SignerSignEx3, then we could skip signtool completely and develop our own tool that does this.

    SignerSignEx3 looks very similar to SignerSignEx2:

// Not documented
typedef HRESULT (WINAPI *SignCallback)(
    CERT_CONTEXT* certContext,
    PVOID opaque,
    ALG_ID algId,
    BYTE* pDigestToSign,
    DWORD cDigestToSign,
    CRYPT_DATA_BLOB* signature
);

// Not documented
typedef struct _SIGN_CALLBACK_INFO {
    DWORD cbSize;
    SignCallback callback;
    PVOID opaque;
} SIGN_CALLBACK_INFO;

HRESULT WINAPI SignerSignEx3(
    DWORD                  dwFlags,
    SIGNER_SUBJECT_INFO    *pSubjectInfo,
    SIGNER_CERT            *pSignerCert,
    SIGNER_SIGNATURE_INFO  *pSignatureInfo,
    SIGNER_PROVIDER_INFO   *pProviderInfo,
    DWORD                  dwTimestampFlags,
    PCSTR                  pszTimestampAlgorithmOid,
    PCWSTR                 pwszHttpTimeStamp,
    PCRYPT_ATTRIBUTES      psRequest,
    PVOID                  pSipData,
    SIGNER_CONTEXT         **ppSignerContext,
    PCERT_STRONG_SIGN_PARA pCryptoPolicy,
    SIGN_CALLBACK_INFO     *signCallbackInfo,
    PVOID                  pReserved
);

Reminder: These APIs are undocumented. I made a best effort at reverse engineering them and, to my knowledge, they function correctly. I do not express any guarantees, though.

    There’s a little more to it than this. First, in order for the callback parameter to even be used, there’s a new flag that needs to be passed in. The value for this flag is 0x400. If this is not specified, the signCallbackInfo parameter is ignored.

    The usage is about what you would expect. A simple invocation might work like this:

HRESULT WINAPI myCallback(
    CERT_CONTEXT* certContext,
    void* opaque,
    ALG_ID algId,
    BYTE* pDigestToSign,
    DWORD cDigestToSign,
    CRYPT_DATA_BLOB* signature)
{
    // Set the signature property
    return 0;
}

int main()
{
    SIGN_CALLBACK_INFO callbackInfo = { 0 };
    callbackInfo.cbSize = sizeof(SIGN_CALLBACK_INFO);
    callbackInfo.callback = myCallback;
    HRESULT blah = SignerSignEx3(0x400, /*omitted*/ &callbackInfo, NULL);
    return blah;
}

    When the callback is made, the signature parameter must be filled in with the signature. It must be heap allocated, but it can be freed after the call to SignerSignEx3 completes.
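
To make that concrete, the callback body might look something like the sketch below once the signature bytes are in hand. The context struct, its fields, and the choice of HeapAlloc are my own illustration; the only hard requirement described above is that the blob be heap allocated and freed by you after SignerSignEx3 returns.

#include <windows.h>
#include <wincrypt.h>
#include <string.h>

// Illustrative context handed to the callback through SIGN_CALLBACK_INFO.opaque.
// The name and fields are hypothetical; use whatever your tool actually needs.
typedef struct {
    BYTE* rawSignature;    // signature bytes already produced by the signing backend
    DWORD rawSignatureLen;
} MY_SIGN_CONTEXT;

HRESULT WINAPI myCallback(
    CERT_CONTEXT* certContext,
    void* opaque,
    ALG_ID algId,
    BYTE* pDigestToSign,
    DWORD cDigestToSign,
    CRYPT_DATA_BLOB* signature)
{
    MY_SIGN_CONTEXT* context = (MY_SIGN_CONTEXT*)opaque;

    // The blob must be heap allocated; we free it ourselves after SignerSignEx3 returns.
    BYTE* buffer = (BYTE*)HeapAlloc(GetProcessHeap(), 0, context->rawSignatureLen);
    if (buffer == NULL)
    {
        return E_OUTOFMEMORY;
    }
    memcpy(buffer, context->rawSignature, context->rawSignatureLen);
    signature->pbData = buffer;
    signature->cbData = context->rawSignatureLen;
    return S_OK;
}

In a real tool, this is the point where the actual signing operation, in our case the call out to Azure Key Vault, would happen.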


We’re not quite done yet. The solution above works with EXEs, DLLs, etc., but it does not work with APPX packages. This is because signing an APPX requires some additional work. Specifically, the APPX Subject Interface Package requires some additional data be supplied in the pSipData parameter.

Once again, we are fortunate that there is some documentation on how this works with SignerSignEx2; however, the details are not quite right for SignerSignEx3.
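
For background, the documented SignerSignEx2 approach has pSipData point at a small structure that carries the signing parameters for the SIP. As best I recall from that documentation, it looks like this:

// From the documented SignerSignEx2 APPX sample: pSipData points at this
// structure, and pSignerParams mirrors the arguments passed to SignerSignEx2.
typedef struct APPX_SIP_CLIENT_DATA {
    SIGNER_SIGN_EX2_PARAMS* pSignerParams;
    IUnknown* pAppxSipState;
} APPX_SIP_CLIENT_DATA;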

    Unfortunately, the struct shape is not documented for SignerSignEx3.

To the best of my understanding, the SIGNER_SIGN_EX3_PARAMS structure should look like this:

typedef struct _SIGNER_SIGN_EX3_PARAMS {
    DWORD                   dwFlags;
    SIGNER_SUBJECT_INFO     *pSubjectInfo;
    SIGNER_CERT             *pSigningCert;
    SIGNER_SIGNATURE_INFO   *pSignatureInfo;
    SIGNER_PROVIDER_INFO    *pProviderInfo;
    DWORD                   dwTimestampFlags;
    PCSTR                   pszTimestampAlgorithmOid;
    PCWSTR                  pwszTimestampURL;
    CRYPT_ATTRIBUTES        *psRequest;
    SIGN_CALLBACK_INFO      *signCallbackInfo;
    SIGNER_CONTEXT          **ppSignerContext;
    CERT_STRONG_SIGN_PARA   *pCryptoPolicy;
    PVOID                   pReserved;
} SIGNER_SIGN_EX3_PARAMS;

If you’re curious about the methodology I used for figuring this out, I documented the process in the GitHub issue for APPX support. I rarely take the time to write down how I learned something, but for once I managed to think of my future self referring back to it. Perhaps that is worthy of another post on another day.


    SignerSignEx3 with a signing callback seems to have one quirk: it cannot be combined with the SIG_APPEND flag, so it cannot be used to append signatures. This seems to be a limitation of SignerSignEx3, as signtool has the same problem when using /dlib with the /as option.


It’s a specific API need, I’ll give you that. However, combined with Subject Interface Packages, Authenticode is extremely flexible: not only in what it can sign, but now also in how it signs.

    AzureSignTool’s source is on GitHub, MIT licensed, and has C# bindings.

  • macOS Platform Invoke

I started foraying a bit into macOS platform invocation with .NET Core and C#. For the most part, it works exactly like it did on Windows. However, there are some important differences between Windows’ native APIs and macOS’.

    The first is calling convention. Win32 APIs are typically going to be stdcall on 32-bit or the AMD64 calling convention on 64-bit. That may not be true for 3rd party libraries, but it is true for most (but not all) Win32 APIs.

macOS’ OS-provided libraries are overwhelmingly cdecl, and use a similar but different calling convention for AMD64 (the System V ABI).

For the most part, that doesn’t affect platform invoke signatures much. However, if you are getting into debugging with LLDB, it’s something to be aware of.

    It does mean that you need to set the CallingConvention appropriately on the DllImportAttribute. For example:

[DllImport("libcrypto", // assuming OpenSSL's libcrypto, where TS_REQ_set_version lives
    EntryPoint = "TS_REQ_set_version",
    CallingConvention = CallingConvention.Cdecl)]

Another point is that macOS uses the LP64 data model, whereas Windows uses LLP64.
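
A quick C illustration of what those two data models mean for primitive type sizes on 64-bit targets:

#include <stdio.h>

int main(void)
{
    // 64-bit Windows (LLP64): int = 4, long = 4, long long = 8, pointers = 8.
    // 64-bit macOS   (LP64):  int = 4, long = 8, long long = 8, pointers = 8.
    printf("int=%zu long=%zu long long=%zu void*=%zu\n",
        sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
    return 0;
}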

A common Win32 platform invocation mistake is trying to marshal a native long to a managed long. The native long in Win32 is 32 bits, whereas in .NET a long is 64 bits. Mismatching them will do strange things to the stack. In Win32 platform invocation, a native long gets marshalled as an int. Win32 uses long long or int64_t for 64-bit types.

macOS is different: its long type is platform dependent. That is, on 32-bit systems the long type is 32-bit, and on 64-bit systems it is 64-bit. In that regard, the long type is most accurately marshalled as an IntPtr. The alternative is to provide two different platform invoke signatures and structs and use the appropriate one depending on the platform.

Keep in mind that while macOS is exclusively 64-bit now, it’s still possible that one day your code will run as 32-bit on a Mac, since the OS is still capable of running 32-bit processes. At the time of writing, even .NET Core itself doesn’t support running 32-bit on a Mac.

[DllImport("libcrypto", // assuming OpenSSL's libcrypto
    EntryPoint = "TS_REQ_set_version",
    CallingConvention = CallingConvention.Cdecl)]
public static extern int TS_REQ_set_version(
    [param: In] TsReqSafeHandle a,
    [param: In, MarshalAs(UnmanagedType.SysInt)] IntPtr version
);

Using IntPtr for the long type is a bit of a pain since, for whatever reason, C# doesn’t really treat it like a numeric type. You cannot create literals of IntPtr cleanly, instead having to do something like (IntPtr)1.

A final possibility is to make a native shim that coerces the data types to something consistent, like int32_t, and have a shim per architecture.
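
A minimal sketch of such a shim, reusing the TS_REQ_set_version example from above; the shim’s name is made up, and the underlying declaration comes from OpenSSL’s ts.h:

#include <stdint.h>
#include <openssl/ts.h>

// Wrap the long-based OpenSSL API in a fixed-width signature so the managed
// DllImport can be identical on every platform and architecture.
int32_t shim_TS_REQ_set_version(TS_REQ* a, int32_t version)
{
    return (int32_t)TS_REQ_set_version(a, (long)version);
}

The managed side would then P/Invoke the shim with plain 32-bit integers on every platform.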

Another point of difference is string encoding. Windows vastly prefers Unicode (UTF-16) and ANSI strings (the W and A APIs), whereas macOS libraries will frequently use UTF-8. The easiest thing to do here is to marshal them as pointers, unfortunately.

Overall, it’s not all that different. Pay attention to the calling convention and be aware of LP64 versus LLP64.

  • Peeking at RubyGems Package Signing

I last wrote about NuGet package signing. The approach being taken has been a hot topic for some folks. However, package signing was something I didn’t have a whole lot of data on. I didn’t have a good feel for how package communities adopt signing, so I decided to get a little more information.

I turned to the RubyGems community. Gems support signing, also with X509 certificates like the NuGet proposal. Support has been there for a while, so the community has had plenty of time to adopt it. This is on top of a high-profile hack of RubyGems, giving developers plenty of motivation to consider signing their packages.

Problem is, there isn’t a whole lot of information about it that I could find, so I decided to create some by looking at the top 200 gems and seeing where they stood on signing.

    The Gems

The top 200 list is based on RubyGems’ own statistics. One problem: their list by popularity only goes up to 100 gems. Fortunately, RubyGems doesn’t do such a hot job of validating its query strings. If I change the page=10 URL query string, supposedly the last page, to page=11, it is quite happy to give me gems 101-110. So, first problem solved.

Many of these gems are supporting gems. That is, they are not gems that people typically include in their projects directly, but rather ones pulled in as a dependency of another gem.

    Getting the latest version of each gem is easy enough with gem fetch. After building our list of gems, we just cache them to disk for inspection later.

    Extracting Certificates

Certificates can be extracted from gems using gem spec <gempath> cert_chain. This will dump the certificate chain as a YAML document. We can use a little bit of Ruby to get the certificates out of the YAML document and onto disk as files.

    The Results

    I will be the first to admit that 200 gems is not a huge sample. However, they represent the most popular gems and the ones I would typically expect to be signed.

Of the 200 gems examined, 17 were signed. That’s 8.5% of gems. Initially I didn’t know what to think of that number. Is it good? Is it bad? If you had asked me to guess, I would have thought only three or four of them would be signed. I don’t think 17 is good, either; it’s just not as bad as I would have expected.

    The next matter is, what is the quality of the signatures? Are they valid? Are they self signed? What digest algorithms and key sizes are used?

Of the 17 signed gems, two of them weren’t really signed at all: they contained placeholders where the certificate should go. Indeed, performing gem install badgem -P HighSecurity resulted in Gem itself thinking the signature was invalid. So we are down to 15 signed gems.

    Some other interesting figures:

    • 15/15 of them were self signed.
    • 2/15 of them used SHA2 signature algorithms. The rest used SHA1.
    • 4/15 were expired.
    • 8/15 used RSA-2048; 1/15 used RSA-3072; 6/15 used RSA-4096.


    I set up a GitHub repository for the scripts used to create this data. It is available at vcsjones/rubygem-signing-research. Everything that you need to extract the certificates from Gems is there.

    The gemlist.txt contains the list of Gems examined. The fetch.sh script will download all of the Gems in this file.

    extract_certs.sh will extract all of the certificates to examine how you see fit.


    It doesn’t seem like signing has really taken off with RubyGems. Part of the issue is that RubyGems simply doesn’t validate the signature by default. This is due to the default validation option in Gem being NoSecurity at the time of writing. Every single Gem that is signed would fail to install with the MediumSecurity trust policy:

gem install gemname -P MediumSecurity

    This will fail for one reason or another, usually because the certificate doesn’t chain back to a trusted root certificate.

    I’m not sure if this is indicative of how adoption will go for NuGet. I’m curious to see where NuGet is three years from now on signing.