Platform Invoking Nullable Value Types

I spend an inordinate amount of time writing platform invoke code in my free time. This largely has to do with the fact that our products over at Thycotic work with a whole myriad of 3rd party systems whose only API is in C, or that we sometimes need to call down into a more specialized Windows API in Win32. I’ve learned a few things over the years doing this, and the one that constantly bugs me is marshaling value types that can be null. Consider this C signature:

		ULONG *cbBuffer,
		PUCHAR pbBuffer,
		DWORD dwFlags

The documentation for said function is something like this:

“cbBuffer is the size of the buffer given to pbBuffer. If this parameter is NULL, the buffer is not filled.”

This cbBuffer parameter puts us in an interesting position. If we make the platform invoke signature for it ref int, it becomes impossible to pass NULL, and we are forced to pass in a buffer for something we don’t really even want. There are a few ways to work around this.
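For contrast, this is a sketch of what that naive declaration looks like; the [DllImport] library name "foo.dll" is a placeholder, since the real library isn’t named here:

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // "foo.dll" is a placeholder library name. A ref parameter can never be
    // null, so callers of this declaration must always supply a cbBuffer value.
    [DllImport("foo.dll")]
    internal static extern int CFoo(ref int cbBuffer, IntPtr pbBuffer, uint dwFlags);
}
```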

Marshal as IntPtr

This isn’t a very good solution but I see it from time to time. This lets us pass IntPtr.Zero when we want to pass NULL, but this makes it difficult to actually pass in a real value. Assuming the platform invoke signature looked something like this:

internal static extern int CFoo(
  IntPtr cbBuffer,
  IntPtr pbBuffer,
  uint dwFlags
);

The only “safe” way (not using unsafe C# constructs) to do this is to allocate memory for an integer, write the integer, and remember to deallocate the memory when we are done:

var cbBuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf<int>());
try
{
    Marshal.WriteInt32(cbBuffer, unchecked((int)0xABAD1DEAU));
    IntPtr myBuffer = /* .... */;
    CFoo(cbBuffer, myBuffer, 0U);
}
finally
{
    Marshal.FreeCoTaskMem(cbBuffer);
}

This is unfortunately rather error prone and cumbersome.


Overloading the Signature

The solution I prefer is to overload the platform invoke signature:

internal static extern int CFoo(
  IntPtr cbBuffer,
  IntPtr pbBuffer,
  uint dwFlags
);

internal static extern int CFoo(
  ref int cbBuffer,
  IntPtr pbBuffer,
  uint dwFlags
);

This works quite well. When I want to give / receive a value for cbBuffer, I use the ref someValue overload. When I don’t, I use the IntPtr overload and give it IntPtr.Zero.

This solution works well since I have not yet given in to unsafe code, and the marshaller is smart enough to do this for us.
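The two overloads also compose naturally into a single friendly wrapper around a nullable int. This is a minimal sketch; the wrapper name and the "foo.dll" library name are placeholders, not part of the original API:

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // "foo.dll" is a placeholder library name for illustration.
    [DllImport("foo.dll")]
    internal static extern int CFoo(IntPtr cbBuffer, IntPtr pbBuffer, uint dwFlags);

    [DllImport("foo.dll")]
    internal static extern int CFoo(ref int cbBuffer, IntPtr pbBuffer, uint dwFlags);

    // Friendly wrapper: a null bufferSize becomes a NULL pointer on the native side.
    internal static int Foo(int? bufferSize, IntPtr buffer, uint flags)
    {
        if (bufferSize == null)
            return CFoo(IntPtr.Zero, buffer, flags);

        int size = bufferSize.Value;
        return CFoo(ref size, buffer, flags);
    }
}
```

Callers then write Foo(null, myBuffer, 0U) or Foo(42, myBuffer, 0U) and never touch IntPtr.Zero directly.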

Unsafe Declarations

A way to unify these two into a single unsafe declaration would look like this:

internal static extern unsafe int CFoo(
  int* cbBuffer,
  void* pbBuffer,
  uint dwFlags
);

That allows passing null into cbBuffer, or passing a pointer to a local like so:

int frob;
CFoo(&frob, null, 0);

This actually allows for the cleanest declaration and consumption of the platform invoke method, at the cost of allowing unsafe code regions.

Ideally, there would be a way to do this without resorting to unsafe code or overloading. Hopefully in the future we’ll be able to do something like this:

ref int refValue = null;
CFoo(refValue, null, 0);

ref int refValue = 42;
CFoo(refValue, pbBuffer, 0);

Put that one high up on my list of C# features I’d like to have.

Some thoughts on HTTPS and Certificates

Over at the Thycotic blog I’ve written two pieces (so far) about doing more with SSL/TLS on web applications. While I tried to keep things short and to-the-point there and offer as little commentary as possible, I wanted to delve more into my own thoughts on HTTPS certificates.

I’ll admit that fine-tuning a server’s TLS protocols and ciphers is not usually the most immediate thing for people to do. The more common, “low hanging fruit” items are the place to start. Closing unused ports, routine patching, following best practices with passwords (or getting rid of passwords altogether), and so on are easier to do, and have relatively low impact when done correctly. Nevertheless, securing content on a server is not a set-it-and-forget-it kind of thing, and managing SSL is not the most straightforward of things, either.

While passwords and their management are hot trends in InfoSec, certificates should be right alongside them, which is what led me to write those posts (and there are more on the way). We often hear the following when discussing passwords:

  • Who has access to account passwords (such as a privileged account)?
  • Do you have a centrally located secure vault for your passwords?
  • Who has access to that vault?
  • Do you have fine-grain control over who has access to these passwords?
  • How do you handle the scenario when an employee or contractor leaves? Do you know what passwords he knew? How do you change them?

I think we are at an interesting point in HTTPS certificates’ life: major changes are needed, and people are just starting to realize these changes are hard, as we found out when Heartbleed forced people to react quickly and replace their certificates. We hit that point a while ago with passwords. We’ve seen a steady rise in password complexity requirements, two-factor authentication is no longer just a corporate tool, and we have a large market of products to manage passwords across sprawling environments.

Particularly with SSL certificates, I don’t see that sense of urgency for change like we do with passwords. That probably has something to do with the fact that SSL certificates themselves are relatively safe at this point. We know an eight-character password can be broken with HashCat in just a few minutes, but we don’t have anything as demonstrable on the certificate side yet, though we’re getting there. MD5 is supposed to be a no-no with certificates these days, yet such certificates are everywhere. One of the bigger problems we have is the multi-decade-valid root certificates that were created in 1998. MD5 and RSA 1024 were pretty good then. Today, not so much. Unlike a password, changing a root certificate is an enormous undertaking. To update a root certificate, everyone needs to get the new root. Microsoft does this regularly through Windows Update, and Linux distros regularly add certificates as well, but the fact is not everyone gets these updates, and replacing a root certificate is going to break some non-trivial percentage of the web.

So I ask, where are these questions being talked about in regards to certificates?

  • Who has access to your certificate private keys?
  • Do you have a centrally located certificate authority to manage these private keys?
  • Who has access to the authority?
  • Do you have fine-grain control over who has access to these private keys?
  • How do you handle the scenario when an employee or contractor leaves? Do you know which private keys he had access to? How do you revoke them?

I know that these questions are being asked, but frankly I don’t see this discussion coming up enough. Once HTTPS / TLS is working in an environment, I think most people just wash their hands of it and don’t worry about it until the certificate expires.

It’ll be really interesting over the next few years to see where these PKI problems go.

Working with CNG Keys

A recent StackOverflow question highlighted that the Cryptography API: Next Generation (CNG) has some interesting features around managing keys, and some confusion around them.

The CNG algorithms have identical purposes to their non-CNG (CryptoAPI) counterparts. The difference boils down to their implementation. CNG completely separates the algorithm from key generation and persistence. This provides a lot more flexibility around the key’s storage location and generation. For example, the key may be stored on a smart card or in an HSM; the algorithm itself is unaware of all those details. The key has a known storage location, called a Key Storage Provider (KSP), which could be one of those locations.

In .NET, keys managed or created by CNG APIs are used with the CngKey class. This class can be used to create, import, export or open an existing key.

Creating an ECDH key is fairly straightforward using CngKey:

var ecdh = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, null);

The CngAlgorithm class specifies what algorithm the key will be used for. In this example, we used ECDH on the P-256 curve.

At this point you have a key, and you can use it a few different ways. The first is to pass it to the constructor of ECDiffieHellmanCng and use the algorithm, for key agreement for example.
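As a sketch of that first route, two ephemeral ECDH keys can each be handed to ECDiffieHellmanCng and used to derive the same shared key material (Windows only; party names are just for illustration):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

internal static class Program
{
    private static void Main()
    {
        // Two ephemeral ECDH keys; each side derives key material
        // from its own private key and the other side's public key.
        using (var alice = new ECDiffieHellmanCng(CngKey.Create(CngAlgorithm.ECDiffieHellmanP256)))
        using (var bob = new ECDiffieHellmanCng(CngKey.Create(CngAlgorithm.ECDiffieHellmanP256)))
        {
            byte[] aliceShared = alice.DeriveKeyMaterial(bob.PublicKey);
            byte[] bobShared = bob.DeriveKeyMaterial(alice.PublicKey);

            // Both sides end up with identical bytes.
            Console.WriteLine(aliceShared.SequenceEqual(bobShared));
        }
    }
}
```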

You can also Export the key. For public keys, this is a pretty easy operation.

var publicKey = ecdh.Export(CngKeyBlobFormat.EccPublicBlob);

Exporting the private key is a different matter. By default, when the KSP creates the key, it’s marked as non-exportable: it cannot leave the KSP. This is generally a good practice when handling private keys. Exporting the private key requires that the key be marked as exportable at creation time. That’s an additional parameter when creating the CngKey.

var ecdh = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, null,
    new CngKeyCreationParameters
    {
        ExportPolicy = CngExportPolicies.AllowPlaintextArchiving
    });

This will allow passing CngKeyBlobFormat.EccPrivateBlob to Export. If the key doesn’t allow plaintext archiving or exporting, an exception will be thrown when attempting to export the private key.

Note: The MSDN documentation for the “None” export policy appears wrong. It says “None” means there are no restrictions. It actually means “The private key cannot be exported; there are no restrictions on exporting the public key.”

The difference between archiving and exporting is the number of times the key can be exported. Archiving allows Export to be called once; exporting allows it to be called multiple times.

You may not need to export the key to use it later, though. In the above examples, we created ephemeral keys. These keys do not persist – they are deleted the moment they are disposed. Creating a non-ephemeral key is easy: instead of passing null as the key name, specify one.

CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, "MyKey");

This key will be persisted to storage. Later, even after the machine is rebooted, the key can be opened by calling CngKey.Open("MyKey"). This allows persisting the private key securely without it ever having been exposed in plaintext.

Using non-ephemeral keys in conjunction with archiving is good way to create a key, allow a user to back it up once, and store it in the KSP for later use. The user could later import the key if they move to another machine or need to restore it from backup.
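Putting that together, this is roughly how a create-once, back-up-once flow might look (a sketch; the key name "MyKey" is just an example):

```csharp
using System.Security.Cryptography;

internal static class KeyBackupExample
{
    internal static byte[] CreateWithOneTimeBackup()
    {
        // AllowPlaintextArchiving: the private blob may be exported exactly once.
        var key = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, "MyKey",
            new CngKeyCreationParameters
            {
                ExportPolicy = CngExportPolicies.AllowPlaintextArchiving
            });

        // The one-time backup; a second call to Export would throw.
        byte[] backup = key.Export(CngKeyBlobFormat.EccPrivateBlob);
        return backup;
    }

    internal static CngKey Reopen()
    {
        // Later, even after a reboot, the persisted key can be reopened by name.
        return CngKey.Open("MyKey");
    }
}
```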

So where are these keys stored? That depends on the KSP being used. In the examples above, they are stored in the Microsoft Software Key Storage Provider, which lives on the computer and is managed by the operating system. However, CNG allows storage in any registered KSP. Another option might be a smart card. You can create a CngKey on a smart card by specifying the CngProvider:

var ecdh = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, "MyKey",
    new CngKeyCreationParameters
    {
        Provider = CngProvider.MicrosoftSmartCardKeyStorageProvider
    });

When this code is run, I am prompted to insert a smart card into my smart card reader.

Smart Card Reader

Inserting a smart card that supports ECDH will persist the key onto the smart card. Likewise, I could specify an HSM if the HSM is a valid KSP by specifying the name:

new CngProvider("MyHSMKSP");

CNG is a very powerful cryptographic API that makes extensibility much easier. Using CNG makes key management much safer.

One place CNG falls short in .NET is using symmetric algorithms, such as AES, with CNG. This is rather a shame, since it would allow use of other AES modes of operation, such as AES-GCM, which is available in Windows 8 today. CNG implementations of AES are available with 3rd party assemblies on CodePlex.

On the asymmetric side, .NET’s CNG support falls short with RSA; ECDSA and ECDH are supported, however.

Making a self-signed SSL certificate (and trusting it) in PowerShell

I recently came across a new PowerShell cmdlet for creating self signed certificates in PowerShell:

$cert = New-SelfSignedCertificate -DnsName localhost, $env:COMPUTERNAME `
	-CertStoreLocation Cert:\LocalMachine\My

This cmdlet is actually really helpful since it lets you specify subject alternative names in the certificate as well. The DnsName parameter uses the first value for the Common Name, and all of the values as SANs. Since the cmdlet returns an X509Certificate2 object, it’s easy enough to use with other .NET APIs as well, such as adding it to the Trusted Root Certification Authorities store:

$rootStore = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList Root, LocalMachine
$rootStore.Open("ReadWrite")
$rootStore.Add($cert)
$rootStore.Close()

And even creating an SSL binding to an IIS Web Site:

New-WebBinding -Name "Default Web Site" -IPAddress "*" -Port 443 -Protocol https
pushd IIS:\SslBindings
$cert | New-Item 0.0.0.0!443

This is a handy little cmdlet, though it lacks some flexibility, such as specifying the validity period and RSA key size (though the default of 2048 is just fine).

Compiler warnings are your friend

I’m a big fan of tooling to speed up development. I’m also an advocate of ensuring that your tools aren’t a crutch – more of a performance enhancer. Generally, tools such as IDEs, compilers, and code generation are there to save time and effort – as long as you understand what they are doing for you.

That’s why I’m always surprised when people ignore the help their tool is trying to give them. A particular one for me is compiler warnings, and people or teams having hundreds or thousands of compiler warnings.

General responses I get when asked about it are, “Oh, that’s just the compiler complaining; it’s safe to ignore them.” The problem with that, though, is it makes it impossible to find the genuinely helpful compiler warnings in the sea of noise.

557 Compiler Warnings

There are lots of ways to solve the problem with compiler warnings.

One option is to just fix the issue the compiler is warning about. The compiler warning is making a suggestion, and sometimes (or most of the time) it’s right – so fix the problem. If the compiler tells you a region of code is unused, then you can remove it safely. These are always the most helpful warnings and why you want a pristine warning list. Often enough, the compiler will catch something that’s easy to gloss over, such as a double free, precision loss when casting numeric types, and the like.

Warnings can be ignored, too. Some warnings you or your team might just not find helpful, or they produce more noise than help. Any good compiler comes with a way to disable warnings, either with a code pragma for certain regions of code, or a compiler flag to completely disable the warning. One that comes up for me often enough is that in some places compiler directives are used for DEBUG builds, which can confuse Visual Studio about code that will never run.

#pragma warning disable 0162
#if DEBUG
	return TimeSpan.MaxValue;
#endif
	return new TimeSpan(0, 5, 0);
#pragma warning restore 0162

Normally, Visual Studio would give a compiler warning that the second return statement is unreachable while in the debug configuration.

I try to avoid this kind of code in the first place – but sometimes it cannot be helped. Various other compilers support a similar notion, for example Clang:

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused-variable"
int unused = 0; /* deliberately unused, for illustration */
#pragma clang diagnostic pop

Sometimes compiler warnings are introduced with no clear plan on how to clean them up, and they’ll sit there for ages. A good example might be obsoleting a method or function that is called in hundreds of places.

[Obsolete("This is obsolete and should not be used.")]
public void ObsoleteMethod()
{
}

I’m generally not a fan of obsoleting methods, except in the rare case that you’re an API or SDK provider and you need to inform consumers of your library. Otherwise, favoring refactoring and completely removing the method is a better option.

In some circumstances, some warnings might be useful as actual errors, too. Security warnings are generally ones that I’ve taken to escalating to a full error, such as using strcpy or violating Objective-C’s ‘self = [super init]’ rule.

In all, I see compiler warnings often ignored when they provide tremendous value to you or a team. It can be a bit tedious to keep compiler warnings “clean”, but it’s well worth the effort.

Investigating IIS 8.5 Features in Server 2012 R2 Preview

Windows Server 2012 R2 hit MSDN Subscribers on Tuesday. I’ve now just gotten the time to download it, play around with it, and look for new things. Besides the obvious, I noticed that Internet Information Services 8.5 made its appearance.

IIS 8.5

Suspended AppPools

I started poking around looking for new features, and there were a few things that caught my eye. The first is a new setting on AppPools called “Idle Time-out Action”. I haven’t been able to find a lot of information about it online yet. It has two options, “Terminate” and “Suspend”. “Terminate” is how AppPool timeouts are handled now: if the AppPool is inactive beyond the threshold, the w3wp.exe process(es) exit. Under Suspend, it appears that the process keeps running after the timeout, but it releases most of its resources. What I did notice is that the AppPool is able to service requests much faster once you wake it.


The implications of this aren’t clear; I will continue to toy around with it to better understand how it works. What I am unclear about is how this may or may not affect .NET AppDomains loaded into the worker process. Do the AppDomains get unloaded? That I am not sure of.

The CLR Version

They’ve also made a small cosmetic, though overdue, change. AppPools no longer call it the “.NET Framework Version”; now it’s called the “.NET CLR Version”.

Server 2012

Server 2012 AppPool

Server 2012 R2

Server 2012 R2 AppPool


The distinction is important, and has caused a lot of confusion about how IIS should be configured. When the .NET Framework 3.5 came out with Visual Studio 2008, many people assumed they needed to change their AppPools to .NET Framework 3.5 in IIS if they developed a website using .NET 3.5. That’s not the case. The Application Pool has only ever cared about the CLR version of the framework. .NET 3.0 and 3.5 were built on the 2.0 CLR, and .NET 4.5 still uses the 4.0 CLR. This is a long overdue change that I welcome.

Logging to ETW

IIS 8.5 also has improved logging functionality with the ability to log to Event Tracing for Windows (ETW), in addition to the classic IIS logging. This will help unify logging across the board for administrators and allow better central management of IIS’s logs.

ETW Logs

Overall IIS 8.5 isn’t a huge change from IIS 8, though it has some nice goodies. There are probably also things that I missed. If I did, let me know!

Aquarium and Aquatic Hobbyist site for StackExchange

I’m a big fan of StackOverflow, and the StackExchange community in general. There is currently a site proposal for Aquarium and Aquatic Hobbyists.

This is a family hobby, so if you have interest in this, please help us get this site off the ground by following it and voting for questions.


It looks like this StackExchange proposal wasn’t accepted. I’m disappointed, but hopefully we’ll get an opportunity to try again in the future.

First Impressions of Azure

A few days ago, I finally got time to sit down and spend a few hours with the Azure platform. After using it for roughly 4 days, developing and tweaking a project I am working on, I wanted to share my experience so far.

Most, if not all, of my personal projects run on the AWS platform (including this blog). From DNS with Route 53 to CDN with CloudFront, I’ve invested a lot of time into Amazon. All of my projects so far have run on a virtual machine that I had full rein over. The project I am currently working on seemed to fit neatly into a hosted application. The appeal of an application, as opposed to a full virtual machine, is how easy cloud providers like AWS make it to scale. Scaling with virtual machines requires a virtual network and a load balancer. The selling language I kept hearing from AWS is that it just works.

My project was going to be a learning experience with Node.js (another topic for another day), and the natural thing I jumped to was to put this in AWS’s Elastic Beanstalk.

Though I couldn’t help but keep looking at Microsoft’s offering, Azure Web Sites. Azure makes a 3 month trial pretty easy to do, so I thought I’d try both, compare, and stick with whichever I found suited my needs more.

Getting Started

Getting started with Elastic Beanstalk and Azure is simple in both cases. Click a button, and both spin up a web site / application for you. Azure was faster, though, so I started there. Immediately Azure had a huge advantage: I can use Git to deploy. I’m a long-time lover and user of Git, and the ability to git push azure master is huge. I decided to start where anyone would, and create a hello world Node.js site:

var http = require('http');
var port = process.env.PORT || 8080;
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World");
});
server.listen(port);


Satisfied with my results on localhost, I did a git init, added the azure remote, and git pushed. Yeah, it was that easy. My Node.js app was available on Azure.

Elastic Beanstalk was still creating my application.

The Good and Bad

Finally, though, Beanstalk was done creating my application. From there, I didn’t really know what to do. It wasn’t as simple as a git push. Beanstalk required me to zip up all my files, taking care not to include things I didn’t care about, like my .git directory, and upload it.

While I am sure Beanstalk provides an API like all other Amazon services, this just didn’t feel right. At this point I am ready to call a cloud platform that doesn’t allow publishing content via source control a deal-breaker. Azure so far has a clear lead over Beanstalk for me, though both have their pluses and minuses when I started to peer under the covers of both a bit more.

Beanstalk gives you quite a bit of flexibility. It is built on their AWS platform, and it doesn’t try to hide that fact. It lets you configure finer details of how the load balancer works, the security groups, and even the virtual machines the application is running on, to an extent. Beanstalk also pushes a lot of the building and deploying locally to you, which might be more desirable if you need to deploy to a local test environment, first. If you have an existing build process, Beanstalk is quite friendly.

Azure does all of the work for you, from deployment to compiling your project (if it requires it). This left me worried: “What do I do when I need to customize some aspect of the deployment?” The answer is wonderful. Azure’s deployment mechanism is called Kudu, and it’s pretty easy to customize. You can include the deployment script in your repository; if you don’t have one in there, it just uses a default. With the Azure CLI tools installed, run azure site deploymentscript --node in the root of your git repository and it will create the default deployment script. Customizing it is then as easy as modifying the deploy.cmd file.

So for now, I am going to stick with Azure Web Sites. I still like the AWS platform, but Azure flat out worked better for me, and with Node.js of all things. Considering what Azure was two years ago, they are moving extremely quickly to keep it a contending platform. Beanstalk feels like an afterthought to me, while Amazon focuses on enterprise-oriented features like Redshift. I haven’t done a cost analysis between the two to determine which would be more cost effective, though a quick glance tells me they should be pretty comparable.

Using ECC for an SSL Certificate

Recently I’ve been toying with the idea of using ECDSA instead of RSA for SSL certificates. A 256-bit ECC key is approximately as strong as a 3072-bit RSA key, which is what drew me towards them. However, I found it a little difficult to get the Certificate Authority to issue the right kind of certificate. Eventually I got it working using CertReq.exe; here is the INF I used to generate the certificate request.

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=yourcommonname"
Exportable = FALSE
KeyLength = 256
KeyUsage = 0xA0
MachineKeySet = TRUE
KeySpec = 0
ProviderName = "Microsoft Software Key Storage Provider"
KeyAlgorithm = "ECDSA_P256"
HashAlgorithm = "SHA256"

[Extensions]
%szOID_SUBJECT_ALT_NAME2% = "{text}dns=domain1&dns=domain2"

[RequestAttributes]
CertificateTemplate = WebServer

With this template I was issued an ECDSA_P256 certificate, which is exactly what I wanted. The usage of a SAN is optional; however, I needed to specify it as well, so I left it here.

– “Case for Elliptic Curve Cryptography”, NSA <programs/elliptic_curve.shtml>

Scaling WordPress Part 2: CDN

Last time we looked at the performance of my site, we discovered that DNS was a source of the problem due to slow resolution time. In this next part, we’ll try to tackle two things at the same time.

A CDN and Cookie-Free Domains

I’m doing this a bit out-of-order compared with how I actually configured my server; thus I’m presenting it in a way that requires the least amount of backtracking. If setting up a CDN seems like too much work, you can skip this one and wait for the next part (GZipping).

One of the things YSlow gave me an “E” on (wow, worse than an F) was cookie-free domains. I’m using Google Analytics to track visitors on my site, and the way it accomplishes this is with cookies.

Cookies are a perfectly valid thing these days; however, setting a cookie for my domain meant that all static content like CSS, JavaScript, and images was sent with the cookie. Analytics was setting cookies when someone hit my site, and the web server was happy enough to send along the cookie header with this static content.

The typical solution for this is to use a cookie-free domain, i.e. use a different domain for your static content. I couldn’t use a subdomain of my main domain, because cookies set at a domain also apply to all of its child domains. My only option for a cookie-free domain was to purchase another domain and serve static content from there.

However, before I did any of that, YSlow suggested I use a Content Delivery Network, or CDN, for my static content. If I moved my static content to a CDN, then I would be taking care of the cookie free domains problem as well.

On the theme of using Amazon for everything, I settled on giving CloudFront a shot. CloudFront is Amazon’s content delivery network solution.

I had a couple of choices on how I wanted to set this up.

Host my static content elsewhere (bad)

I had originally gone the route of trying to move all of my static content to Amazon’s S3 solution. CloudFront is easy to configure to serve content from an S3 bucket, but this turned out to be a troublesome approach from a maintenance standpoint. I would not only need to move all of my uploads to S3, but also the theme content and other “innards” of WordPress. It would be difficult to keep everything in sync. I wasn’t able to find a WordPress plugin that could do that for me, and manually uploading content to S3 seemed tiresome. It would also mean any time I upgraded a plugin, theme, or WordPress itself, I would need to move all of those files to S3 again. This didn’t seem like a workable approach.

Leave my static content as is (good)

I could leave my static content exactly where it is, and set up CloudFront to use my own server as the origin for static content. This seemed like the best approach. If my file content changed, the CDN would pick it up (eventually). Any new files would instantly be picked up by CloudFront, and I wouldn’t have to change where WordPress’s physical files were. That would be better for upgrades and plugin changes.

Making it all work

I had initially set up CloudFront to point directly at my main domain. That worked well enough; CloudFront served anything from my server in its edge cache. This, however, left me with a bit of a yucky feeling: any dynamic content could possibly end up in CloudFront’s cache. I wasn’t a big fan of that, so I decided to truly separate my static content from my dynamic content, even if they were all in the same place.

I first brought up a new static subdomain and pointed it to the same physical location as the main site. At first, this was a true 100% copy of the site. My next intention was to configure nginx, my server, to only serve static content from the static subdomain, and block it from the main domain. As a last step, I would configure CloudFront to use the static subdomain as its origin.

Configuring NGINX

NGINX is a great server, and I love its flexibility and speed. My static content virtual site configuration looked like this:

server {
    listen          80;
    root            /var/www/wordpress/;

    location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        add_header Pragma public;
        access_log off;
    }

    location / {
        return 404;
    }
}
This is an abbreviated version of what I currently have running now. The static subdomain will only serve static content, which helps mitigate the possibility of a search engine finding this domain somehow and dinging my SEO rankings for serving duplicate content.

So the main domain serves dynamic content, the static subdomain serves only static content, and my CloudFront CDN uses the static subdomain as its origin.

Finally, I blocked static content on the main domain, with a few exceptions:

location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
    if ($http_referer ~ "wp-admin") {
        break;
    }
    if ($request_filename ~* jquery\.js$) {
        break;
    }
    return 404;
}
This blocks static content on the main domain unless it is jQuery, or the referer contains wp-admin.

jQuery is a bit of a special beast in WordPress. It doesn’t appear to be easy to cache via a CDN because WordPress dynamically builds it depending on what the extensions ask for.

wp-admin is whitelisted as a referer because we don’t want to break the admin section of WordPress, where URLs appear to be a little more difficult to fix.

Fixing URLs in WordPress

So now with this fancy-pants CDN, I actually needed to use it. For now, I am using two WordPress plugins to accomplish this.

CDN Rewrite works well to rewrite content from WordPress’s themes and includes. This changes my theme to load its CSS, images, and JS from CloudFront. The exception is jQuery which, as noted above, is not served over the CDN (though it is gzipped).

Real Time Find and Replace works well to rewrite content in actual post bodies, that CDN Rewrite doesn’t seem to do.

My intention is to eventually eliminate both by fixing the actual links in post bodies and creating a child theme. This will be a small improvement, since WordPress will have less processing to do on the rendered HTML.


CloudFront CNAME

One option CloudFront has is the ability to point a CNAME in your DNS at CloudFront. I initially took this approach because it allowed more control over where my static content was located: if I used CloudFront’s own endpoint domain in my URLs, then after spending all that time fixing static content URLs, I would have to fix them again if I later opted not to use CloudFront. Initially I had set up a subdomain of my main domain to point to CloudFront, but because it was a subdomain, it was getting Google Analytics cookies. Instead I opted to just use CloudFront’s domain. This keeps my static content cookie-free, and also means I won’t pay for DNS lookups in Route 53.


This CDN approach works well for me, with a few issues that are either acceptable trade-offs or easy to work around.

Because I disabled static content on the main domain to better separate the static content, the administration bar is a little broken when used outside of wp-admin.

CloudFront has no easy way to purge all of its cache. If you need to purge something from CloudFront, you need to specify the path you want to purge, including the query string.

More of this series

  1. Scaling WordPress Part 1: DNS
  2. Scaling WordPress Part 2: CDN