Working with CNG Keys

A recent StackOverflow question highlighted that the Cryptography Next Generation (CNG) API has some interesting features around key management, and some confusion around them.

The CNG algorithms have identical purposes to their non-CNG (CryptoAPI) counterparts; the difference boils down to their implementation. CNG completely separates the algorithm from key generation and persistence, which provides a lot more flexibility in where a key is generated and stored. For example, the key may live on a smart card or in an HSM, and the algorithm itself is unaware of those details. The key has a known storage location, called a Key Storage Provider (KSP), which could be any of those locations.

In .NET, keys managed or created by the CNG APIs are represented by the CngKey class. This class can be used to create, import, export, or open an existing key.

Creating an ECDH key is fairly straightforward using CngKey.

var ecdh = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, null);

The CngAlgorithm class specifies what algorithm the key will be used for. In this example, we used ECDH on the P-256 curve.

At this point you have a key, and you can use it a few different ways. The first is to pass it to the constructor of ECDiffieHellmanCng and use the algorithm, for key agreement as an example.
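
For illustration, here is a minimal sketch (not from the original post) of passing the key to ECDiffieHellmanCng and deriving shared key material, where otherPublicBlob is a hypothetical EccPublicBlob received from the other party:

public byte[] DeriveSharedKey(byte[] otherPublicBlob)
{
    using (var key = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, null))
    using (var ecdh = new ECDiffieHellmanCng(key))
    {
        // Rehydrate the other party's public key from its EccPublicBlob bytes.
        var otherPublicKey = ECDiffieHellmanCngPublicKey.FromByteArray(
            otherPublicBlob, CngKeyBlobFormat.EccPublicBlob);

        // Derive shared key material, suitable for seeding a symmetric algorithm.
        return ecdh.DeriveKeyMaterial(otherPublicKey);
    }
}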

You can also Export the key. For public keys, this is a pretty easy operation.

var publicKey = ecdh.Export(CngKeyBlobFormat.EccPublicBlob);
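
On the receiving side, that blob can be turned back into a key with CngKey.Import (a quick sketch, not from the original post):

using (var imported = CngKey.Import(publicKey, CngKeyBlobFormat.EccPublicBlob))
{
    // The imported key contains only the public portion.
}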

Exporting the private key is a different matter. By default, when the KSP creates the key, it's marked as non-exportable: it cannot leave the KSP. This is generally a good practice when handling private keys. However, exporting the private key requires that the key be marked as exportable when it is created in the KSP. That's an additional parameter when creating the CngKey.

var ecdh = CngKey.Create(CngAlgorithm.ECDiffieHellmanP256,
    null,
    new CngKeyCreationParameters
    {
        ExportPolicy = CngExportPolicies.AllowPlaintextArchiving
    });

This allows passing EccPrivateBlob to Export. If the key doesn't allow plaintext archiving or exporting, an exception is thrown when attempting to export it.
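
As a rough sketch (not from the original post), exporting the private blob and handling a key that disallows it might look like this:

byte[] privateKey;
try
{
    privateKey = ecdh.Export(CngKeyBlobFormat.EccPrivateBlob);
}
catch (CryptographicException)
{
    // The key's export policy does not allow plaintext export or archiving.
    privateKey = null;
}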

Note: The MSDN documentation for the "None" export policy appears to be wrong. It says "None" means there are no restrictions. It actually means the private key cannot be exported; there are no restrictions on exporting the public key.

The difference between archiving and exporting is the number of times the key can be exported. Archiving allows Export to be called once; exporting allows it to be called multiple times.

You may not need to export the key to use it later, though. In the above examples, we created ephemeral keys. These keys do not persist – they are deleted the moment they are disposed. Creating a non-ephemeral key is easy: instead of passing null as the key name, specify a key name.

CngKey.Create(CngAlgorithm.ECDiffieHellmanP256, "MyKey");

This key will be persisted to storage. Later, even after the machine is rebooted, the key can be opened by calling CngKey.Open("MyKey");. This allows persisting the private key securely without it ever having been exposed in plaintext.
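
As a small illustration (the key name is just the example above), reopening and using the persisted key later might look like this:

if (CngKey.Exists("MyKey"))
{
    using (var key = CngKey.Open("MyKey"))
    using (var ecdh = new ECDiffieHellmanCng(key))
    {
        // Use the key directly; the private material never leaves the KSP.
    }
}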

Using non-ephemeral keys in conjunction with archiving is a good way to create a key, allow a user to back it up once, and store it in the KSP for later use. The user could later import the key if they move to another machine or need to restore it from backup.

So where are these keys stored? That depends on the KSP being used. In the examples above, they are stored in the Microsoft Software Key Storage Provider, which lives on the computer and is managed by the operating system. However, CNG allows storage in any registered KSP. Another option might be a smart card. You can create a CngKey on a smart card by specifying the CngProvider.

CngKey.Create(CngAlgorithm.ECDiffieHellmanP256,
    "MyKey",
    new CngKeyCreationParameters
    {
        Provider = CngProvider.MicrosoftSmartCardKeyStorageProvider
    });

When this code is run, I am prompted to insert a smart card into my smart card reader.

Smart Card Reader

Inserting a smart card that supports ECDH will persist the key onto the smart card. Likewise, I could specify an HSM, provided the HSM has a registered KSP, by specifying its provider name:

new CngProvider("MyHSMKSP");

CNG is a very powerful cryptographic API that makes extensibility much easier. Using CNG makes key management much safer.

One place CNG falls short in .NET is using symmetric algorithms, such as AES, with CNG. This is rather a shame, since it would allow the use of other AES modes of operation, such as AES-GCM, which are available in Windows 8 today. CNG implementations of AES are available via 3rd party assemblies on CodePlex.

On the asymmetric side, CNG in .NET falls short with RSA; DSA and Diffie-Hellman are supported, however.

Making a self-signed SSL certificate (and trusting it) in PowerShell

I recently came across a new PowerShell cmdlet for creating self-signed certificates:

$cert = New-SelfSignedCertificate -DnsName localhost, $env:COMPUTERNAME `
    -CertStoreLocation Cert:\LocalMachine\My

This cmdlet is actually really helpful since it lets you specify subject alternative names in the certificate as well. The DnsName parameter uses the first value for the Common Name, and all of the values as SANs. Since the cmdlet returns an X509Certificate2 object, it's easy enough to use in other .NET APIs as well, such as adding it to the Trusted Root Certification Authorities store:

$rootStore = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList Root, LocalMachine
$rootStore.Open("MaxAllowed")
$rootStore.Add($cert)
$rootStore.Close()

And even creating an SSL binding to an IIS Web Site:

New-WebBinding -Name "Default Web Site" -IPAddress "*" -Port 443 -Protocol https
pushd IIS:\SslBindings
$cert | New-Item 0.0.0.0!443
popd

This is a handy little cmdlet, though it lacks some flexibility, such as specifying the validity period and RSA key size (though the default of 2048 is just fine).

Compiler warnings are your friend

I'm a big fan of tooling to speed up development. I'm also an advocate of ensuring that your tools aren't a crutch – more of a performance enhancer. Generally, tools such as IDEs, compilers, and code generation are there to save time and effort – as long as you understand what they are doing for you.

That’s why I’m always surprised when people ignore the help their tool is trying to give them. A particular one for me is compiler warnings, and people or teams having hundreds or thousands of compiler warnings.

The general response I get when asking about it is, "Oh, that's just the compiler complaining; it's safe to ignore them." The problem with that, though, is that it makes it impossible to find the genuinely helpful compiler warnings in the sea of noise.

557 Compiler Warnings

There are lots of ways to solve the problem with compiler warnings.

One option is to just fix the issue the compiler is warning about. The compiler warning is making a suggestion, and sometimes (or most of the time) it's right – so fix the problem. If the compiler tells you a region of code is unused, then you can remove it safely. These are always the most helpful warnings, and they are why you want a pristine warning list. Often enough, the compiler will catch something that's easy to gloss over, such as a double free, precision loss when casting numeric types, and the like.

Warnings can be ignored, too. Some warnings you or your team might just not find helpful, or they produce more noise than help. Any good compiler comes with a way to disable warnings, either with a code pragma for certain regions of code, or a compiler flag to disable the warning entirely. One that comes up for me often enough is code where compiler directives are used for DEBUG builds, which leads the compiler to warn about code that will never run.

#pragma warning disable 0162
#if DEBUG
	return TimeSpan.MaxValue;
#endif
	return new TimeSpan(0,5,0);
#pragma warning restore 0162

Normally, Visual Studio would give a compiler warning that the second return statement is unreachable in the Debug configuration.

I try to avoid this kind of code in the first place – but sometimes it cannot be helped. Various other compilers support a similar notion, for example Clang:

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused-variable"

//Region

#pragma clang diagnostic pop

Sometimes compiler warnings are introduced with no clear plan on how to clean them up, and they'll sit there for ages. A good example might be obsoleting a method or function that is called in hundreds of places.

[Obsolete("This is obsolete and should not be used.")]
public void ObsoleteMethod()
{
	//...
}

I'm generally not a fan of obsoleting methods, except in the rare case that you're an API or SDK provider and you need to inform consumers of your library. Otherwise, refactoring and completely removing the method is a better option.

In some circumstances, some warnings might be better treated as actual errors, too. Security warnings are generally ones that I've taken to escalating to a full error, such as using strcpy or violating Objective-C's 'self = [super init]' rule.
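
In C#, for example, the ObsoleteAttribute shown earlier can itself be escalated from a warning to a compile error by passing true as its second argument (a small illustration, not from the original post):

[Obsolete("This is obsolete and should not be used.", error: true)]
public void ObsoleteMethod()
{
	// Callers now fail to compile instead of merely producing a warning.
}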

In all, I see compiler warnings often ignored when they provide tremendous value to you or a team. It can be a bit tedious to keep compiler warnings “clean”, but it’s well worth the effort.

Investigating IIS 8.5 Features in Server 2012 R2 Preview

Windows Server 2012 R2 hit MSDN Subscribers on Tuesday. I've now gotten the time to download it, play around with it, and look for new things. Besides the obvious, I noticed that Internet Information Services 8.5 made its appearance.

IIS 8.5

Suspended AppPools

I started poking around looking for new features, and there were a few things that caught my eye. The first is a new setting on AppPools called "Idle Time-out Action". I haven't been able to find much information about this online yet. It has two options, "Terminate" and "Suspend". "Terminate" is how AppPool timeouts are handled now: if the AppPool is inactive beyond the threshold, the w3wp.exe process(es) exit. Under Suspend, it appears that the process keeps running after the timeout, but releases most of its resources. What I did notice is that the AppPool is able to service requests much faster once you wake it.

Suspend

The implications of this aren't clear; I will continue to toy around with it to better understand how it works. What I am unclear about is how this may or may not affect the .NET AppDomains loaded into the worker process. Do the AppDomains get unloaded? That I am not sure of.

The CLR Version

They've also made a small cosmetic, though overdue, change. AppPools no longer call it the ".NET Framework Version"; now it's called the ".NET CLR Version".

Server 2012

Server 2012 AppPool

Server 2012 R2

Server 2012 R2 AppPool


The distinction is important, and has caused a lot of confusion about how IIS should be configured. When the .NET Framework 3.5 came out with Visual Studio 2008, many people assumed they needed to change their AppPools to .NET Framework 3.5 in IIS if they developed a website using .NET 3.5. That's not the case. The Application Pool has only ever cared about the CLR version of the framework: .NET 3.0 and 3.5 were built on the 2.0 CLR, and .NET 4.5 still uses the 4.0 CLR. This is a long overdue change that I welcome.

Logging to ETW

IIS 8.5 also has improved logging functionality with the ability to log to Event Tracing for Windows (ETW), in addition to the classic IIS logging. This will help unify logging across the board for administrators and allow better central management of IIS's logs.

ETW Logs

Overall IIS 8.5 isn’t a huge change from IIS 8, though it has some nice goodies. There are probably also things that I missed. If I did, let me know!

Aquarium and Aquatic Hobbyist site for StackExchange

I'm a big fan of StackOverflow, and the StackExchange community in general. There is currently a site proposal for Aquarium and Aquatic Hobbyist.

This is a family hobby, so if you have interest in this, please help us get this site off the ground by following it and voting for questions.

Update

It looks like this StackExchange proposal wasn’t accepted. I’m disappointed, and hopefully we’ll get an opportunity to try again in the future.

First Impressions of Azure

A few days ago, I finally got time to sit down and spend a few hours with the Azure platform. After using it for roughly 4 days, developing and tweaking a project I am working on, I wanted to share my experience so far.

Most, if not all, of my personal projects are running on the AWS platform (including this blog). From DNS with Route 53 to CDN with CloudFront, I've invested a lot of time into Amazon. All of my projects so far have run on a virtual machine that I had full rein over. The project I am currently working on seemed to fit neatly into a hosted application. The appeal of an application, as opposed to a full virtual machine, is how easy cloud providers like AWS make it to scale. Scaling with virtual machines requires a virtual network and a load balancer. The selling point I kept hearing from AWS is that it just works.

My project was going to be a learning experience with Node.js (another topic for another day), and the natural thing I jumped to was to put this in AWS’s Elastic Beanstalk.

I couldn't help but keep looking at Microsoft's offering, Azure Web Sites, though. Azure makes a 3-month trial pretty easy to do, so I thought I'd try both, compare, and stick with whichever suited my needs more.

Getting Started

Getting started with both Elastic Beanstalk and Azure is simple. Click a button, and both spin up a web site / application for you. Azure was fast, though, so I started there. Immediately Azure had a huge advantage: I can use Git to deploy. I'm a long time lover and user of Git, and the ability to git push azure master is huge. I decided to start where anyone would, and create a hello world Node.js site:

var http = require('http');
var port = process.env.PORT || 8080;
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World");
});

server.listen(port);

Satisfied with my results on localhost, I did a git init, added the azure remote, and git pushed. Yeah, it was that easy. My Node.js app was available on Azure.

Elastic Beanstalk was still creating my application.

The Good and Bad

Finally, Beanstalk was done creating my application. From there, though, I didn't really know what to do. It wasn't as simple as a git push. Beanstalk required me to zip up all my files, taking care not to include things I didn't care about like my .git directory, and upload it.

While I am sure Beanstalk provides an API like all other Amazon services, this just didn’t feel right. At this point I am ready to call a cloud platform that doesn’t allow publishing content via source control a deal-breaker. Azure so far has a clear lead over Beanstalk for me, though both have their pluses and minuses when I started to peer under the covers of both a bit more.

Beanstalk gives you quite a bit of flexibility. It is built on their AWS platform, and it doesn’t try to hide that fact. It lets you configure finer details of how the load balancer works, the security groups, and even the virtual machines the application is running on, to an extent. Beanstalk also pushes a lot of the building and deploying locally to you, which might be more desirable if you need to deploy to a local test environment, first. If you have an existing build process, Beanstalk is quite friendly.

Azure does all of the work for you, from deployment to compiling your project (if it requires it). This left me worried: "What do I do when I need to customize some aspect of the deployment?" The answer, though, is wonderful. Azure's deployment mechanism is called Kudu, and it's pretty easy to customize. You can include the deployment script in your repository; if you don't have one in there, it just uses a default. With the Azure CLI Tools installed, run azure site deploymentscript --node in the root of your git repository and it will create the default deployment script. Customizing it is then as easy as modifying the deploy.cmd file.

So for now, I am going to stick with Azure Web Sites. I still like the AWS platform, but Azure flat out worked better for me, and with Node.js of all things. Considering what Azure was 2 years ago, they are moving extremely quickly to keep it a contending platform. Beanstalk feels like an afterthought to me, while Amazon focuses on enterprise features like Redshift. I haven't done a cost analysis between the two to determine which would be more cost effective, though a quick glance tells me they should be pretty comparable.

Using ECC for an SSL Certificate

Recently I've been toying with the idea of using ECDSA instead of RSA for SSL certificates. A 256-bit ECC key is approximately as strong as a 3072-bit RSA key, which is what drew me towards them. However, I found it a little difficult to get the Certificate Authority to issue the right kind of certificate. Eventually I got it working using CertReq.exe; here is the INF I used to generate the certificate request.

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=yourcommonname"
Exportable = FALSE
KeyLength = 256
KeyUsage = 0xA0
MachineKeySet = TRUE
KeySpec = 0
ProviderName = "Microsoft Software Key Storage Provider"
ProviderType=12
KeyAlgorithm = "ECDSA_P256"
HashAlgorithm = "SHA256"

[Strings]
szOID_SUBJECT_ALT_NAME2 = "2.5.29.17"
szOID_ENHANCED_KEY_USAGE = "2.5.29.37"
szOID_PKI_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"

[Extensions]
%szOID_SUBJECT_ALT_NAME2% = "{text}dns=domain1&dns=domain2"
%szOID_ENHANCED_KEY_USAGE% = "{text}%szOID_PKI_KP_SERVER_AUTH%"

[RequestAttributes]
CertificateTemplate= WebServer

With this template I was issued an ECDSA_P256 certificate, which is exactly what I wanted. The usage of a SAN is optional; however, I needed to specify one as well, so I left it in here.

– "Case for Elliptic Curve Cryptography", NSA <http://www.nsa.gov/business/programs/elliptic_curve.shtml>

Scaling WordPress Part 2: CDN

Last time we looked at the performance of my site, we discovered that the DNS was a source of the problem due to slow resolution time. In this next part, we’ll try and tackle two things at the same time.

A CDN and Cookie-Free Domains

I'm doing this a bit out-of-order compared with how I actually configured my server, so I'm presenting this in a way that will require the least amount of backtracking. If setting up a CDN seems like too much work, you can skip this one and wait for the next part (GZipping).

One of the things YSlow gave me an “E” on (wow, worse than an F) was cookie-free domains. I’m using Google Analytics to track visitors on my site, and the way it accomplishes this is with cookies.

Cookies are a perfectly valid thing these days; however, setting a cookie for my domain, vcsjones.com, meant that all static content like CSS, JavaScript, and images was sent with the cookie. Analytics was setting cookies when someone hit my site, and the web server was happy enough to send the cookie header along with this static content.

The typical solution for this is to use a cookie-free domain, i.e. use a different domain for your static content. I couldn't use a subdomain like static.vcsjones.com because cookies set at vcsjones.com also apply to all child domains. My only option for a cookie-free domain was to purchase another domain and serve static content from there, like staticvcsjones.com.

However, before I did any of that, YSlow suggested I use a Content Delivery Network, or CDN, for my static content. If I moved my static content to a CDN, then I would be taking care of the cookie free domains problem as well.

On the theme of using Amazon for everything, I settled on giving CloudFront a shot. CloudFront is Amazon’s content delivery network solution.

I had a couple of choices on how I wanted to set this up.

Host my static content elsewhere (bad)

I had originally gone the route of trying to move all of my static content to Amazon's S3 solution. CloudFront is easy to configure to serve content from an S3 bucket, but this turned out to be a troublesome approach from a maintenance standpoint. I would not only need to move all of my uploads to S3, but also the theme content and other "innards" of WordPress. Keeping everything in sync would be difficult. I wasn't able to find a WordPress plugin that could keep all of that in sync for me, and manually uploading content to S3 seemed tiresome. It would also mean that any time I upgraded a plugin, theme, or WordPress itself, I would need to move all of those files to S3 again. This didn't seem like a workable approach.

Leave my static content as is (good)

I could leave my static content exactly where it is, and set up CloudFront to use my own server as the origin for static content. This seemed like the best approach. If my file content changed, the CDN would pick it up (eventually). Any new files would instantly be picked up by CloudFront, and I wouldn't have to change where WordPress's physical files were. That would be better for upgrades and plugin changes.

Making it all work

I had initially set up CloudFront to point to vcsjones.com. That worked well enough: CloudFront served anything from my server in its edge cache. This, however, left me with a bit of a yucky feeling: any dynamic content could possibly end up in CloudFront's cache. I wasn't a big fan of that, so I decided to truly separate my static content from my dynamic content, even if it was all in the same place.

I first brought up a new subdomain, static.vcsjones.com. I had it pointed to the same physical location as vcsjones.com. At first, this was a true 100% copy of vcsjones.com. My next intention was to configure nginx, my server, to only serve static content from static.vcsjones.com, and block it from vcsjones.com. As a last step, I would configure CloudFront to use static.vcsjones.com as an origin.

Configuring NGINX

NGINX is a great server, and I love its flexibility and speed. My static content virtual site configuration looked like this:

server {
    listen          80;
    server_name     static.vcsjones.com;
    root            /var/www/wordpress/;

    location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        add_header Pragma public;
        access_log off;
        break;
    }

    location / {
        return 404;
    }
}

This is an abbreviated version of what I currently have running now. static.vcsjones.com will only serve static content, which helps mitigate the possibility of a search engine finding this domain somehow and dinging my SEO rankings for serving duplicate content.

So, vcsjones.com is being used to serve dynamic content, static.vcsjones.com is used to serve only static content, and my CloudFront CDN is using static.vcsjones.com.

Finally, I blocked static content from vcsjones.com with a few exceptions.

location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
    if ($http_referer ~ "wp-admin") {
        break;
    }
    if ($request_filename ~* jquery\.js$) {
        break;
    } 
    return 404;
}

This blocks static content from vcsjones.com unless it is jQuery, or the referer contains wp-admin.

jQuery is a bit of a special beast in WordPress. It doesn’t appear to be easy to cache via a CDN because WordPress dynamically builds it depending on what the extensions ask for.

wp-admin is whitelisted as a referer because we don’t want to break the admin section of WordPress, which appears to be a little more difficult to fix URLs in.

Fixing URLs in WordPress

So now with this fancy-pants CDN, I actually needed to use it. For now, I am using two WordPress plugins to accomplish this.

CDN Rewrite works well to rewrite URLs in WordPress's themes and includes. This changes my theme to load its CSS, images, and JS from CloudFront. The exception to this is jQuery: as mentioned above, WordPress appears to dynamically build jQuery depending on which pieces are needed. For that reason, jQuery is not served over my CDN, which is rather unfortunate (though it is gzipped).

Real Time Find and Replace works well to rewrite content in actual post bodies, which CDN Rewrite doesn't seem to do.

My intention is to eventually eliminate both by fixing the actual links in post bodies, and creating a child theme. This will be a small improvement such that WordPress has less processing to do on the rendered HTML.

Due to this odd handling of jQuery, neither plugin catches the jQuery URL, so it remains the one piece served outside the CDN.

CloudFront CNAME

One option CloudFront has is the ability to point a CNAME in your DNS at CloudFront. I initially took this approach because it allowed more control over where my static content was located: if I used my CDN endpoint, dsdujlkb89x0f.cloudfront.net, directly, then after spending all that time fixing static content URLs, I would have to fix them again if I ever opted to stop using CloudFront. Initially I had set up cdn.vcsjones.com to point to CloudFront, but because it is a subdomain of vcsjones.com, it was getting Google Analytics cookies. Instead I opted to just use CloudFront's domain. This keeps my static content cookie free, and also means I won't pay for DNS lookups in Route 53.

Gotchas

This CDN approach works well for me, with a few issues that are either acceptable trade-offs or easy to work around.

Because I disabled static content on vcsjones.com to better separate the static content, the Administration bar is a little broken when used outside of wp-admin.

CloudFront has no easy way to purge all of its cache. If you need to purge something from CloudFront, you need to specify the path you want to purge, including the query string.

More of this series

  1. Scaling WordPress Part 1: DNS
  2. Scaling WordPress Part 2: CDN

Scaling WordPress Part 1: DNS

A week ago a friend told me that my website was a little slow to load. "Great," I thought, "I guess I need to improve the specs of my little website server."

As I was looking at my server's details and utilization, I came to the realization that whatever the reason my site was slow, it was not because the server was overworked. In fact, it was sitting there idling most of the time.

When I gave it some thought, I realized that while my website was working just fine, I had never spent any amount of time tweaking the server for performance. I decided to see if tweaking some caching settings would improve things a bit.

And man, did they improve. I started by using a browser extension called YSlow to determine where I needed to focus my improvements and to start with some low-hanging fruit. Initially, YSlow gave my site a big orange D as a grade. I decided over the weekend to see how much I could improve things. My journey required tweaking WordPress and DNS, and getting a CDN involved.

DNS Matters

The first thing that got pointed out to me was that my DNS was slow: several hundred milliseconds to resolve my site. I had never really given DNS performance much thought; I had just gone with the free DNS that my domain registrar provided. While free, I got what I paid for.

Almost all of my infrastructure is on Amazon AWS, so I looked at Route 53 to transfer my DNS to. This would increase my costs slightly, but it boiled down to hardly a dollar extra a month. The zone costs $0.50 per month, plus another $0.50 per million queries. This won't break the bank, and if my site ever gets that popular, I have bigger problems to worry about. Additionally, later on, we'll be offloading a lot of DNS queries to a CDN.

One thing with Route 53 that you may want to tweak is the default TTL (Time-To-Live). It defaults to 300 seconds, which is rather low considering my DNS settings don't change that much. Instead, I set it to 2 hours (7200 seconds) so the DNS records would be cached longer, thus reducing the number of lookups. I'd be surprised if I paid more than $0.50 a month for DNS queries.

This had a small but measurable improvement on the performance of my site, though only on the first hit when the DNS isn't already cached. I still had a ways to go, but it was a step in the right direction.

Next time, we’ll look at setting up a CDN (Content Delivery Network) for serving static content.

More of this series

  1. Scaling WordPress Part 1: DNS
  2. Scaling WordPress Part 2: CDN

Passing Public Keys Between Objective-C OpenSSL and .NET

I had a bit of a fun time working on a simple key exchange mechanism between C# and Objective-C using OpenSSL. My goal was on the Objective-C side to generate a public/private key pair, and pass the public key to C# in a way that I could use it to encrypt small amounts of data (like a symmetric key).

This proved to be a little challenging to me, as I didn’t want to resort to using a 3rd party solution in .NET like BouncyCastle or a managed OpenSSL wrapper. Turns out, it’s not that hard, if just a little under-documented. Starting with Objective-C, here’s how I am generating an RSA key pair.

Assuming you have an RSA* object from OpenSSL, you can export the public key like so:

-(NSData*)publicKey {
    int size = i2d_RSAPublicKey(_rsa, NULL);
    unsigned char* temp = OPENSSL_malloc(size);
    //the use of a temporary variable is mandatory!
    unsigned char* copy = temp;
    int keySize = i2d_RSAPublicKey(_rsa, &copy);
    NSData* data = nil;
    if (keySize > 0) {
        data = [[NSData alloc] initWithBytes:temp length:keySize];
    }
    OPENSSL_free(temp);
    return data;
}

This is for illustrative purposes: don't forget to do error checking!

So now we have an NSData, but how is this public key actually stored? How can I use this in .NET?

The first thing to understand is what format i2d_RSAPublicKey exports the data in. It exports the data in ASN.1 DER format, and getting that into an RSACryptoServiceProvider is possible with no 3rd party support.

// ASN.1 DER encoding of NULL: tag 0x05, length 0x00.
private static readonly byte[] _nullAsnBytes = new byte[] {5, 0};

public RSACryptoServiceProvider GetCryptoServiceProvider(byte[] asnDerPublicKey)
{
    const string RSA_OID = "1.2.840.113549.1.1.1";
    var oid = new Oid(RSA_OID);
    var asnPublicKey = new AsnEncodedData(oid, asnDerPublicKey);
    var nullAsnValue = new AsnEncodedData(_nullAsnBytes);
    var publicKey = new PublicKey(oid, nullAsnValue, asnPublicKey);
    return publicKey.Key as RSACryptoServiceProvider;
}

The 1.2.840.113549.1.1.1 value is the OID for RSA, also known by the header name szOID_RSA_RSA. We use the ASN.1 DER encoding of NULL ({5, 0}) for the algorithm parameters, and then we pass the AsnEncodedData to the PublicKey class, from which we can obtain an RSACryptoServiceProvider for the public key.

Together, these two code snippets allow working with RSA public keys in Objective-C (iOS or Mac) and in .NET.
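
As a usage sketch (the asnDerPublicKey and aesKey variables here are hypothetical), the imported key can then encrypt a small secret, such as a symmetric key, from .NET:

using (RSACryptoServiceProvider rsa = GetCryptoServiceProvider(asnDerPublicKey))
{
    // true selects OAEP padding; the result can only be decrypted with the
    // private key held on the Objective-C side.
    byte[] encryptedKey = rsa.Encrypt(aesKey, true);
}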