• How to use SecureString in .NET

    Don’t. Probably.

    Okay, maybe I should elaborate. The SecureString class in .NET has been a source of a lot of questions on StackOverflow. It has the word “secure” in its name, “secure” seems good, so we should use SecureString!

    SecureString is a managed wrapper around the DPAPI APIs in Windows. The idea behind SecureString is that it holds a string in memory that is considered sensitive, and that you don’t want exposed in a memory leak, a memory dump, or any other scenario where a cleartext string in memory is undesirable.

    There are a number of problems with this though.


    Constructing a SecureString in the first place is difficult. A common question on StackOverflow is “How do I create a SecureString?”, and invariably an answer like this comes up:

    //Don't do this!
    var secureString = new SecureString();
    var password = Console.ReadLine();
    foreach (var c in password)
    {
        secureString.AppendChar(c);
    }

    This copies the string into a SecureString. Well, except for the problem that the original string, password, is still in memory, in the clear. Since strings in .NET are immutable, “zeroing” one means doing something ugly, like pinning the managed string and overwriting it from unmanaged code. In the native world, that is a perfectly valid thing to do. In .NET, it isn’t always, such as when the string has been interned. The CLR expects to be in control of a string’s memory, and stomping on it can do unexpected things.

    You can reasonably create a SecureString if you use Console.ReadKey in a loop, or have low-level access to a Win32 password box. WPF even does this out of the box for you by providing a SecurePassword property on PasswordBox.
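
    The Console.ReadKey approach can be sketched like this: the password is built one character at a time, so the full cleartext never exists as a System.String. This is only a sketch; the prompt and echo behavior are up to you.

```csharp
using System;
using System.Security;

static class PasswordReader
{
    // Builds a SecureString from the console, one keystroke at a time.
    public static SecureString ReadPassword()
    {
        var secureString = new SecureString();
        ConsoleKeyInfo key;
        // intercept: true keeps the typed characters from being echoed.
        while ((key = Console.ReadKey(intercept: true)).Key != ConsoleKey.Enter)
        {
            if (key.Key == ConsoleKey.Backspace)
            {
                if (secureString.Length > 0)
                {
                    secureString.RemoveAt(secureString.Length - 1);
                }
            }
            else if (key.KeyChar != '\0')
            {
                secureString.AppendChar(key.KeyChar);
            }
        }
        secureString.MakeReadOnly();
        return secureString;
    }
}
```

    Once MakeReadOnly has been called, the contents can no longer be appended to or modified.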

    In web applications, it’s downright infeasible to use SecureString on content that came from a web browser. You can’t have an <input type=password> on a page and put its value into a SecureString, securely. By the time your code runs, the ASP.NET pipeline has done so much buffering, copying, and shuffling of the request body that there is no practical way to scrub the password from memory.

    So let’s say you are able to construct a SecureString, securely. Now what?


    There are very, very few APIs in the .NET Framework that know how to work with a SecureString. NetworkCredential is the big one, which is accepted in a few places, like LdapConnection, WebRequest, and a few others.

    NetworkCredential is actually interesting because it prefers storing things in SecureString, if the platform supports it.
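
    As a sketch of what that looks like, NetworkCredential accepts the SecureString directly, and a consumer like LdapConnection takes the credential without the password ever being materialized as a System.String. The server name and the GetPassword helper here are hypothetical stand-ins.

```csharp
using System;
using System.DirectoryServices.Protocols;
using System.Net;
using System.Security;

class Example
{
    static void Main()
    {
        // Hypothetical helper; assume it builds the SecureString one
        // character at a time, as described above.
        SecureString password = GetPassword();

        // NetworkCredential keeps the password as a SecureString when the
        // platform supports it.
        var credential = new NetworkCredential("someUser", password);

        using (var connection = new LdapConnection("ldap.example.com")) // hypothetical host
        {
            connection.Bind(credential);
        }
    }

    static SecureString GetPassword()
    {
        var s = new SecureString();
        foreach (var c in "placeholder") // placeholder only
        {
            s.AppendChar(c);
        }
        return s;
    }
}
```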

    Even the .NET Framework can’t always use SecureString. It might just end up calling InternalGetPassword on the NetworkCredential, which makes a managed copy of the string anyway, left to be cleaned up by the garbage collector.

    SecureString does work well, relatively speaking, if you are working with an API that wants a string as a pointer. In that case, Marshal.SecureStringToGlobalAllocUnicode (substitute the allocator of your choice) may be useful, as long as you call Marshal.ZeroFreeGlobalAllocUnicode when you’re done. That will rarely be the case, though. More often, you need to carefully copy your SecureString back into managed memory, do something useful with it without letting it get copied out of your control, then clean everything up. This is what’s involved in, say, hashing a SecureString.

    var secureString = new SecureString(); //Assume this holds a password
    var buffer = new byte[secureString.Length * 2];
    var ptr = Marshal.SecureStringToGlobalAllocUnicode(secureString);
    try
    {
        Marshal.Copy(ptr, buffer, 0, buffer.Length);
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(buffer);
            //Do something useful with the hash
        } //Dispose on HashAlgorithm zeros internal state
    }
    finally
    {
        Marshal.ZeroFreeGlobalAllocUnicode(ptr);
        Array.Clear(buffer, 0, buffer.Length);
    }

    Here we copy the SecureString into native memory, copy that into a managed byte array, hash it, then clean up both copies: the native buffer gets zeroed and freed with Marshal.ZeroFreeGlobalAllocUnicode, and Array.Clear overwrites the contents of the managed array. This does, however, imply that we trust what ComputeHash does with the contents of the array. It could be creating copies of the byte array without us knowing about it; that depends on the implementation of SHA256. SHA256Managed does a reasonable job of not doing this, and disposing of the object clears some internal arrays, also using Array.Clear.

    In case it wasn’t obvious, our secure string is naked during this process. It might only be for a few moments, but that is the fundamental truth of SecureString: the only way to do anything useful with it is to copy its cleartext form into memory for controlled, short periods of time. The practical upshot is that memory has to be examined at just the right moment for the cleartext to be visible.

    So now we have to think about what we’re protecting against. If we are trying to protect against a rogue process being run by the user, remember that a process cannot read another process’s memory without the PROCESS_VM_READ permission, which is usually reserved for elevated processes and debuggers.

    If the process is running as an administrator, then it’s already game over, and SecureString is the least of your worries. With administrative permissions, someone could simply hook your process and inject code that calls SecureStringToGlobalAllocUnicode anyway.

    None of this begins to even touch on other problems, like paging to disk. It is ever-so-slightly possible that the SecureString’s contents, while unprotected for those brief moments, gets paged to disk. Or the system gets hibernated and written to the hibernation file on disk. It’s much more difficult to expunge data from disks.

    To summarize, the security benefit offered by SecureString is very small in contrast to the level of effort required to actually use it. There are probably much more sensible things to focus this effort on that will yield much better protection for your users.

  • Re-examining HPKP

    Not too long ago I wrote about HTTP Public Key Pinning and adopting it on a website. Now that I’ve had the opportunity to help a few websites deploy it, I thought it would be worth re-visiting the subject and looking at what worked, and what didn’t.

    The first discussion that came up was deciding whether or not it is a good idea at all. It’s easy to say, “of course”: HPKP is a security feature, we want security, therefore we want HPKP. But HPKP comes with some costs. Many of those costs can be reduced by doing other things, but it boils down to having excellent posture around key management, process, and documentation. It’s easy enough to turn on HPKP for a blog, but doing so across an organization of operations, security, and development teams is considerably more difficult. The penalties are unforgiving; at worst, you may end up with a completely unusable domain. So before you jump right in and start hashing public keys, look at the long-term viability of being able to do this, and build the tools and process around it to make it work.

    Given that HPKP has considerably high risk of getting wrong, it’s worth getting a solid understanding of what it does, and does not, address. You may come to the conclusion that the risks outweigh the benefits, and time should be better spent on other ways to improve security.

    Deciding to move forward, there are a number of things that need to be discussed. The first is what to pin. Some suggest pinning an intermediate certificate, while others suggest pinning a leaf. My recommendation here is to pin only what you control. For most people, that means the leaf. Very large organizations may have their own intermediate certificate. Some recommend pinning a CA’s intermediate to reduce the risk of losing keys; in that scenario, you would just re-key your certificate from the same certificate authority. The downside is that CAs deprecate intermediate certificates, and there is no guarantee they’ll use the same key in a new intermediate certificate. If you do decide to pin an intermediate, I would recommend one of your backup pins be for a leaf.

    Then there was the matter of backup pins. User agents require that a backup pin is available before they will enforce pins. I would recommend more than one backup pin, with some diversity in both the algorithm and the key size. For example, if I intended to pin an RSA-2048 key, my backup pins might be another RSA-2048 and an ECDSA-P256. The different algorithm gives you the option to move to it immediately in the wake of a discovery, such as finding out that RSA is broken, or that the NIST P256 curve has weaknesses. Even if nothing like that happens, which it probably won’t, it also gives you a straightforward path to increasing key sizes, which is a natural thing to do over time.

    Having a backup pin for the same algorithm allows recovery from the loss of a key, or exposure of the private key without changing the algorithm used. Moving from one algorithm to another, like RSA to ECDSA, will carry some compatibility risks with older clients. Having a backup pin of the same key length and algorithm at least ensures you can recover without the additional burden of investigating compatibility.

    Lastly there was the matter of testing backup pins. I strongly recommend using Report-Only mode first when deploying HPKP, and testing a failover to each and every backup pin. While doing this, I ran into a situation where a backup pin wasn’t working. It turned out that the SHA256 digest of the SPKI was actually a digest of the string “File not found”.
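
    That kind of mistake is easy to catch if you recompute each pin yourself before deploying. A pin is just the base64 encoding of the SHA-256 digest of the certificate’s DER-encoded SubjectPublicKeyInfo. A minimal sketch, assuming a runtime where ExportSubjectPublicKeyInfo is available (.NET 5 or later):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class Pins
{
    // Computes the HPKP pin-sha256 value for a certificate: the base64
    // encoding of the SHA-256 digest of the DER SubjectPublicKeyInfo.
    public static string ComputePin(X509Certificate2 certificate)
    {
        byte[] spki = certificate.PublicKey.ExportSubjectPublicKeyInfo();
        using (var sha256 = SHA256.Create())
        {
            return Convert.ToBase64String(sha256.ComputeHash(spki));
        }
    }
}
```

    A well-formed pin is always 44 base64 characters. Anything else means something upstream, like a failed download of the certificate, fed garbage into the digest.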

  • ECDSA Certificates and Keys in .NET

    It’s not uncommon to need to sign something with the private key of a certificate. If you’re using RSA or DSA certificates, that’s been a fairly straightforward process with the .NET Framework. If your certificate was an ECDSA certificate, it was not. You often had to fall back to p/invoking CryptAcquireCertificatePrivateKey to obtain an NCrypt CNG key handle.

    In the .NET Framework 4.6, this got a whole lot easier with the extension method GetECDsaPrivateKey.
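
    When it works, it is as simple as the sketch below. The certificate loading is assumed, and the SignData overload that takes a HashAlgorithmName needs .NET Framework 4.6.1 or later.

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class Example
{
    static void Main()
    {
        // Assumed: a certificate with an ECDSA private key, loaded from a
        // store or a file.
        X509Certificate2 cert = LoadCertificate();
        byte[] data = { 1, 2, 3 };

        // GetECDsaPrivateKey is the 4.6 extension method; it returns null
        // if the certificate has no ECDSA private key.
        using (ECDsa ecdsa = cert.GetECDsaPrivateKey())
        {
            byte[] signature = ecdsa.SignData(data, HashAlgorithmName.SHA256);
            Console.WriteLine(Convert.ToBase64String(signature));
        }
    }

    static X509Certificate2 LoadCertificate()
    {
        // Hypothetical path; substitute your own certificate.
        return new X509Certificate2("my-cert.pfx", "password");
    }
}
```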

    I did run into a problem with it though. I was getting an exception:

    System.ArgumentException: Keys used with ECDsaCng algorithm must have an algorithm group of ECDsa.

    I did a lot of double checking of the certificate, yes the certificate had an ECC key in it and the algorithm parameters explicitly defined the P256 curve for ECDSA. What gives?

    I decided to fall back to old tricks and use CryptAcquireCertificatePrivateKey to create an instance of CngKey, which I would then pass to ECDsaCng so I could sign something.

    This, also, failed when passing the CngKey to the constructor of ECDsaCng.

    Upon examining the CngKey instance itself, CNG believed the key was ECDH, not ECDSA. This was getting bizarre. Strangely enough, I had another certificate where this worked perfectly fine and CNG was happy to announce that the algorithm was ECDSA.

    ECDH and ECDSA keys are interchangeable. You probably shouldn’t use the same key for key agreement (ECDH) and signing (ECDSA), but ultimately they are just points on a curve. Yet somehow, CNG was making a distinction.

    We can rule out the certificate itself as the source of the problem: if I opened the private key by name, CNG still believed the key was for ECDH. Clearly, this was an issue with the private key itself, not the certificate.

    The cause of all of this mess turned out to be how the CNG’s key usage gets set. Every CNG key has a “key usage” property. For an ECC key, if the key is capable of doing key agreement, CNG decides that the key is ECDH, even though the key is also perfectly valid for signing and verifying.

    Now the question is, how do we set the key usage? Key usage needs to be set before the key is finalized, which means during creation. It cannot be changed once NCryptFinalizeKey has been called on the key.
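
    With the managed CNG wrappers, that means supplying the “Key Usage” NCrypt property as part of the key creation parameters, so it is applied before the key is finalized. A rough, Windows-only sketch; the flag value 0x2 is NCRYPT_ALLOW_SIGNING_FLAG:

```csharp
using System;
using System.Security.Cryptography;

class Example
{
    static void Main()
    {
        var creationParameters = new CngKeyCreationParameters();

        // "Key Usage" must be set before NCryptFinalizeKey runs, so it goes
        // in the creation parameters. 0x2 is NCRYPT_ALLOW_SIGNING_FLAG.
        creationParameters.Parameters.Add(
            new CngProperty(
                "Key Usage",
                BitConverter.GetBytes(0x00000002),
                CngPropertyOptions.None));

        // An ephemeral P-256 key restricted to signing.
        using (CngKey key = CngKey.Create(CngAlgorithm.ECDsaP256, null, creationParameters))
        {
            Console.WriteLine(key.AlgorithmGroup);
        }
    }
}
```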

    My certificate and private key were imported as a PKCS#12 (.pfx) file through the install wizard. It’s during this process that the key’s usage is getting set.

    After a bit of trial and error, I determined that setting the keyUsage extension on the certificate does not matter. That is, even if the keyUsage extension was marked critical and set to digitalSignature (80), the CNG key would still get imported as AllUsages.

    Eventually, a lightbulb came on and I examined the PKCS#12 file itself. It turns out that the PKCS#12 file was controlling how the private key’s usage was being set.

    A PKCS#12 file contains a number of things, one of them is a “key attributes” property. If you use OpenSSL to create a PKCS#12 file from a certificate and private key, OpenSSL won’t set the key attributes to anything by default. If you create the PKCS#12 file with the -keysig option then the import wizard will correctly set the key’s usage. If you create the PKCS#12 file with Windows, then Windows will preserve the key usage during export when creating a PKCS#12 file.

    Let’s sum up:

    If you have an ECDSA certificate and private key and you create a PKCS#12 file using OpenSSL, it will not set the key attributes unless you specify the -keysig option. So to fix this problem, re-create the PKCS#12 file from OpenSSL with the correct options.

    Alternatively, you can wait for the .NET Framework 4.6.2. In this version of the framework, the ECDsaCng class is happy to use an ECDH key if it can. This is also the only option you have if you really do want a key’s usage set to AllUsages.

  • Moving to Static Content

    If my site is looking a little different today, that’s because I’ve redone it from scratch. Gone is WordPress, gone is PHP.

    Like many others, I’ve started using a static site generator, in this case Jekyll. Static content makes a lot more sense, and a lot of things I wanted to play around with on my previous blog I didn’t get to do because WordPress fought me most of the way.

    Things are simpler here. There is no JavaScript. I’ve abandoned any kind of in-page analytics because I don’t value it more than I value other people’s privacy. Here, all we have is static HTML and CSS.

    No JavaScript, dynamic content, or assets from other domains means I can have a plain and simple Content Security Policy, which I effectively couldn’t do with WordPress due to the mess of inline CSS and JavaScript that were thrown around.

    It also means I can enable brotli on everything.

    Finally, there is a real deploy process for this. No more manually crushing images and creating WebP variants of the image by hand. This all happens automatically, behind the scenes.

    Making it Work

    The site’s content is now on GitHub. On commit, GitHub notifies AWS CodeDeploy, which pulls down the repository to the EC2 instance and kicks off the build. It starts as a gulp task, which runs Jekyll, then compresses images and creates WebP copies. The repository also contains the NGINX configuration, which CodeDeploy copies to the correct location and then reloads NGINX.

    AWS CodeDeploy works pretty well for this. It’s a tad difficult to get started with, which was a bit discouraging, but after reading the documentation through a few times it eventually clicked and I was able to get it working correctly.

    The migration has left some things missing, for now, such as comments, but eventually I’ll bring those back.

  • Authenticode Stuffing Tricks

    I recently started a project called Authenticode Lint. The tool has two purposes. The primary one is, “Am I digitally signing my binaries correctly?”; the second, “Are other people signing their binaries correctly?”

    To back up a bit, Authenticode is the scheme that Microsoft uses to digitally sign DLLs, EXEs, etc. It’s not a difficult thing to do, but it does offer enough flexibility that it can be done in a sub-optimal way. The linter is made up of a series of checks that either pass or fail.

    When you sign a binary, the signature is embedded inside of it (usually, there are exceptions). The goal of the signature is to ensure the binary hasn’t been tampered with, and that it comes from a trusted source. The former presents a problem.

    If I were to take a binary, compute a signature over it to make sure it hasn’t changed, and then embed the signature in the binary, I would have just changed the contents of the binary and invalidated the signature I just computed by embedding it.

    To work around this problem, there are some places inside of EXEs that the digital signature process ignores. The notable one is the place the signatures themselves go: the certificate table is completely excluded from the hash, as is the checksum of the file in the optional header.
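
    Concretely, both skipped regions can be located straight from the PE headers. The offsets in this sketch come from the PE/COFF specification: CheckSum lives at offset 64 of the optional header in both formats, and the Certificate Table data directory entry is at offset 128 for PE32 or 144 for PE32+.

```csharp
using System;

static class AuthenticodeRegions
{
    // Given the raw bytes of a PE file, return the file offsets of the two
    // fields the Authenticode hash skips: the optional header's CheckSum,
    // and the Certificate Table data directory entry.
    public static (int ChecksumOffset, int CertTableDirectoryOffset) Find(byte[] pe)
    {
        int peHeaderOffset = BitConverter.ToInt32(pe, 0x3C);   // e_lfanew in the DOS header
        int optionalHeader = peHeaderOffset + 4 + 20;          // skip "PE\0\0" and the COFF header
        ushort magic = BitConverter.ToUInt16(pe, optionalHeader);
        bool isPe32Plus = magic == 0x20B;                      // 0x10B = PE32, 0x20B = PE32+

        int checksumOffset = optionalHeader + 64;              // same offset in both formats
        int certTableOffset = optionalHeader + (isPe32Plus ? 144 : 128);
        return (checksumOffset, certTableOffset);
    }
}
```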

    Now we have tamper-proof binaries that prevent changing the executable after it’s been signed, right?

    Ideally, yes, but unfortunately, no. There are some legitimate reasons to change a binary after it’s been signed. Some applications might want to embed a per-user configuration, and re-signing the executable on a per-user basis is too costly in terms of time and security. Signing is relatively fast, but not fast enough to scale reasonably. It would also mean that to perform the re-signing, the signing keys would need to be available to an automated system. That’s generally not a good idea: a signing key should live on an HSM or smart card, and signing should always be done manually by one person (or more, if using an m-of-n scheme).

    It turns out it is possible to slightly modify an executable after it’s been signed. There are a few ways to do this, and I’ll cover as many as I know of.