My Cygwin setup

Part of changing jobs meant that I had to rebuild my Windows virtual machine. Most of that process I’ve gotten down to a science at this point, but all of the little changes I’ve made to Cygwin over the years had been lost to memory. I thought, “make a blog post,” since it’ll help me remember, and possibly help others.

Ditching cygdrive

I don’t really like having to type /cygdrive/c – I’d much rather type /c, like Git Bash does out of the box.

The solution for this is to modify the /etc/fstab file and add this line at the end:

c:/ /c fat32 binary 0 0

Don’t worry about the “fat32” in there; use that even if your file system is NTFS. You can do this for arbitrary folders, too:

c:/SomeFolder /SomeFolder fat32 binary 0 0

Now I can simply type /SomeFolder instead of /cygdrive/c/SomeFolder.

Changing the home path

Cygwin’s home path is not very helpful. I choose to map it to my Windows home directory (again like Git Bash). The trick for this is to edit the file /etc/nsswitch.conf and add the following line:

db_home: /%H

This sets the home to your Windows home directory. Note that this change affects all users, so if you have multiple users on Windows, don’t hard-code a particular path; instead, use a substitution like %H above.

Prompt

I typically set my prompt to this in my .bash_profile file:

export PS1="\[\e[00;32m\]\u\[\e[0m\]\[\e[00;37m\] \[\e[0m\]\[\e[00;33m\]\w\[\e[0m\]\[\e[00;37m\]\n\\$\[\e[0m\]"

This is similar to the one Cygwin puts there by default, but does not include the machine name. Briefly: \u is the username, \w is the current working directory, \n puts the $ on its own line, and the \[\e[..m\] sequences set and reset colors.

vimrc

Not exactly Cygwin related, but here is the starter .vimrc file I use. I’m sure I’ll update it to include more as I remember more.

set bs=indent,eol,start  " allow backspacing over autoindent, line breaks, and the start of insert
set nocp                 " disable vi-compatibility mode
set nu                   " show line numbers
syntax on                " enable syntax highlighting

If anyone has some recommendations, leave them in the comments.

New Pasture

For the past eight and a half years, I’ve enjoyed many different challenges at Thycotic, from working on some tough security implementations to consulting. I’m always interested in new challenges and in seeing what else lies beyond where I am now. That is why I’ve accepted employment with Higher Logic. I’ll be joining their team and continuing what I do best: solving problems and doing my best to make customers happy.

I’m looking forward to it.

When a bool isn’t a bool

Jared Parsons and I got into an interesting discussion on Twitter and uncovered a quirk in the C# compiler.

To begin, Jared already did the heavy lifting on the issue at hand, namely how a CLI bool can be defined, in his blog post Not all "true" are created equal. Jared’s example is different from mine; here is an independent reproduction of the same problem.

// note: requires an unsafe context (compile with /unsafe)
byte* data = stackalloc byte[2];
data[0] = 1;
data[1] = 2;
var boolData = (bool*)data;
bool a = boolData[0];
bool b = boolData[1];
Console.WriteLine(a); //True
Console.WriteLine(b); //True
Console.WriteLine(a == b); //False

Despite both a and b being “true” boolean values, they are not equal to one another. JavaScript has a similar issue, which is why the “not not” (!!) coercion trick exists there. You’d think a similar trick would work in C#:

Console.WriteLine(a == !!b);

This, surprisingly, still prints out false. Yet this prints out true:

var c = !b;
var d = !c;
Console.WriteLine(a == d);

Seems like they are identical, no? Semantically they are, but functionally they are not. In the former case, the C# compiler optimizes away the double negation since it considers it pointless. Introducing the intermediate variables tricks the compiler so that the double negation is no longer optimized away, and thus the coercion succeeds.

This seems to be a rare occurrence where the C# compiler performs an optimization that actually alters behavior. Granted, it is an extreme corner case, but I only found out about it because I actually ran into it at one point. In my opinion, the removal of the double negation at compile time is a bug. This optimization does appear to be in the C# compiler, not the JIT. The resulting IL is:

IL_001a:  ldloc.2
IL_001b:  ldloc.3
IL_001c:  ceq
IL_001e:  call       void [mscorlib]System.Console::WriteLine(bool)

Notice that it just loads the two locals and immediately compares them; it makes no attempt to invert the value of the third local (which is b in this case).
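
One way to coerce such a value back to a canonical bool, without relying on the compiler preserving a double negation, is to compare the underlying storage byte yourself. Here is a minimal sketch; the Normalize helper is my own illustration, not something from either post, and it must be compiled with /unsafe:

using System;

static class BoolCoercion
{
    // Read the bool's underlying storage byte and compare it to zero.
    // The comparison happens at runtime, so the compiler cannot fold it
    // away, and it always produces a canonical 1 (true) or 0 (false).
    static unsafe bool Normalize(bool value)
    {
        return *(byte*)&value != 0;
    }

    static unsafe void Main()
    {
        byte* data = stackalloc byte[2];
        data[0] = 1;
        data[1] = 2;
        var boolData = (bool*)data;
        Console.WriteLine(Normalize(boolData[0]) == Normalize(boolData[1])); // True
    }
}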

A FIPS primer for Developers

FIPS is a curious thing. Most developers haven’t heard of it, to which I say, “Good.” I’m going to touch very lightly on the unslayable dragon that is the “140-1 and 140-2” part of FIPS.

Unfortunately, if you do any development for the Federal Government or a contractor, or sell your product to the government (or try to get on a GSA schedule, like Schedule 70), then you will probably come across it. Perhaps you maintain a product or a library that .NET developers use, and one of them says they get an error from your code like “This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.”

Let’s start with “what is FIPS?” A Google search will tell you it stands for “Federal Information Processing Standard,” which is a standard controlled by the National Institute of Standards and Technology (NIST). That in itself isn’t very helpful, so let’s discuss the two.

NIST is an agency that is part of the Department of Commerce. Their goal is to standardize certain procedures and references used within the United States. While seemingly boring, they standardize important things. For example, how much does a kilogram exactly weigh? This is an incredibly important value for commerce, since many goods are traded and sold by the kilogram. NIST, along with the International Bureau of Weights and Measures, standardizes this value within the United States to enable commerce. NIST also standardizes many other things, from taximeters to the emerging hydrogen fuel refueling stations.

NIST also standardizes how government agencies store and protect data, which ensures each agency has a consistent approach to secure data storage. This is known as the Federal Information Processing Standard, or FIPS. FIPS touches on some things that are not related to security and communication, such as FIPS 10-4, which standardizes country codes. However, the one subject that eclipses all of the others in FIPS is data protection, from encryption (both symmetric and asymmetric) to hashing. FIPS attempts to standardize security procedures, data storage and communication, and maintain a set of approved algorithms.

FIPS 140 encompasses requirements for cryptographic “modules”. FIPS refers to them as modules and not algorithms because a “module” may be an actual piece of hardware, or a pure-software implementation.

There are two key things to distinguish in the context of FIPS 140: validated and approved functions.

An approved function is a function, or algorithm, which FIPS 140-2 accepts, as documented in Annex A. This means that for certain applications, certain algorithms must be used as applicable to FIPS 140-2. In the case of symmetric encryption, the approved algorithms are AES, 3DES, and Skipjack. Each of these algorithms has its own NIST publication: AES’s, for example, is FIPS Publication 197, and 3DES’s is Special Publication 800-67.

Bringing this back into the context of .NET, AesManaged (https://msdn.microsoft.com/en-us/library/system.security.cryptography.aesmanaged.aspx) is a class that implements the AES algorithm. However, there is another implementation of AES in the .NET Framework, called AesCryptoServiceProvider (https://msdn.microsoft.com/en-us/library/system.security.cryptography.aescryptoserviceprovider.aspx). They appear to be completely identical in functionality: they produce the same results and are indistinguishable from each other. There is one key difference between them, though: the former is not validated, while the latter is.

Validation is where NIST actually tests the implementation of the algorithm for correctness through the Cryptographic Algorithm Validation Program (CAVP). The purpose of this program is to test vendor implementations of these algorithms and the different modes of operation for each (like CBC or CFB). The AesManaged class, while implemented correctly, has not been validated by NIST. This is a common theme among the cryptographic classes in .NET whose names end in Managed: AesManaged, SHA1Managed, etc. are all not FIPS validated. From an implementation perspective, the *Managed classes are written in pure managed code, while the classes ending in CryptoServiceProvider all use platform invocation to hand the work off to Windows; more specifically, to Windows’s Cryptographic Service Provider (CSP) functionality.
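
To see that the two are interchangeable in behavior, here is a minimal sketch (the zeroed key, IV, and plaintext buffers are just placeholders for illustration) showing both implementations producing identical ciphertext:

using System;
using System.Linq;
using System.Security.Cryptography;

class AesComparison
{
    static byte[] Encrypt(Aes aes, byte[] key, byte[] iv, byte[] data)
    {
        // Both classes derive from Aes and default to CBC with PKCS7 padding.
        using (aes)
        using (var encryptor = aes.CreateEncryptor(key, iv))
            return encryptor.TransformFinalBlock(data, 0, data.Length);
    }

    static void Main()
    {
        byte[] key = new byte[16], iv = new byte[16], plaintext = new byte[32];

        byte[] managed = Encrypt(new AesManaged(), key, iv, plaintext);
        byte[] csp = Encrypt(new AesCryptoServiceProvider(), key, iv, plaintext);

        Console.WriteLine(managed.SequenceEqual(csp)); // True
    }
}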

Why bother having a *Managed implementation though? Why not just use the *CryptoServiceProvider all of the time?

Recall that .NET, when originally launched, was very Code Access Security (CAS) heavy (another post for another time). Before IIS supported Application Pools, IIS’s only means of separating .NET web applications from each other was to put them in Medium Trust. If you recall back in the .NET 1.x days, many “shared” web hosts ran websites in Medium Trust; without it, one web application could access content and resources from other sites on the same server.

Medium Trust also meant no platform invoke, so the *CryptoServiceProvider classes wouldn’t work. Having no support for encryption in Medium Trust would be a problem, so the algorithms were implemented in pure managed code. At the time, the managed implementations were also likely faster, since platform invoke carries a performance penalty. Today, that performance difference is likely smaller, and the managed implementations cannot take advantage of new processor features, like AES-NI; the CryptoServiceProvider implementations can, and do.
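
A rough way to see this for yourself is a micro-benchmark along these lines (a sketch of my own; the buffer sizes are arbitrary and the results will vary by hardware):

using System;
using System.Diagnostics;
using System.Security.Cryptography;

class AesBenchmark
{
    static void Time(string name, Aes aes, byte[] key, byte[] iv, byte[] data)
    {
        var sw = Stopwatch.StartNew();
        using (aes)
        using (var encryptor = aes.CreateEncryptor(key, iv))
            encryptor.TransformFinalBlock(data, 0, data.Length);
        Console.WriteLine("{0}: {1} ms", name, sw.ElapsedMilliseconds);
    }

    static void Main()
    {
        byte[] key = new byte[16], iv = new byte[16];
        byte[] data = new byte[32 * 1024 * 1024]; // 32 MB of zeros

        // On hardware with AES-NI, the CSP implementation typically wins.
        Time("AesManaged", new AesManaged(), key, iv, data);
        Time("AesCryptoServiceProvider", new AesCryptoServiceProvider(), key, iv, data);
    }
}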

The last outstanding question might be: why not just put the managed implementations through the CAVP program? In a nutshell: cost and time. The program takes a while to complete, costs a lot of money, and if anything changed in those algorithms, they’d need to get re-validated. The number of people that need a FIPS validated implementation is low, and those people are unlikely to be running in Medium Trust. For them, using the CryptoServiceProvider implementations makes the most sense.

There are some important things to note about FIPS. FIPS validated algorithms are not in any way “stronger” or “better” than those that aren’t; rather, validation is a matter of policy indicating that the implementation has been reviewed for correctness. Nor are all good algorithms approved. Some, like Twofish, are considered secure for use in production, yet have no badge of approval from NIST. Other algorithms are left off of the approved function list because they are weak and shouldn’t be used. The Data Encryption Standard (DES, not to be confused with 3DES) and MD5 are two algorithms that are still broadly used today, but contain enough issues to warrant security concerns.

These algorithms remain in use because of legacy platforms; the RADIUS protocol, for example, continues to rely on MD5. Those that need to interact with these old platforms using old protocols don’t have much of a choice, which puts them in a tough situation: supporting these platforms can be outright impossible if FIPS validation is required. It is a matter of choice: you can either have your feature and no FIPS, or FIPS and no feature. You cannot have both.

The Windows operating system itself does not run fully FIPS compliant out of the box. BitLocker, SChannel, and some other components of Windows do not restrict themselves to FIPS validated algorithms unless configured to do so. That is done in the Local Security Policy.

[Screenshot: the “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing” setting in the Local Security Policy editor.]

When enabled, the SChannel functionality of Windows which implements SSL and TLS strips away non-FIPS validated algorithms, such as cipher suites using MD5 or CAMELLIA.

This setting also affects the .NET Framework. The classes mentioned earlier, AesManaged, SHA1Managed, even MD5CryptoServiceProvider, will all throw an exception if they are used while this policy setting is enabled. It’s arguable whether this is a good thing to do. It follows the letter of the law, but can be a tricky issue for some. For starters, hashing algorithms like MD5 and SHA1 can be used for non-security applications, such as caching (a web server might use MD5 for ETags, for example) or non-critical file integrity checks. Yet the policy setting is unable to distinguish why a developer is using a particular algorithm; the algorithm simply ceases to function anyway. The exception the algorithms throw is

This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

This can be worked around with a setting in the .config file, but this setting, like the one in the Security Policy, is a big hammer: it completely disables the check.
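
For reference, the setting is the enforceFIPSPolicy element in the runtime section of the application’s configuration file, which, to the best of my knowledge, looks like this:

<configuration>
  <runtime>
    <!-- Disables the FIPS policy check for this application only. -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>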

Some might work around this issue by copying and pasting some random Stack Overflow code that implements the MD5 algorithm, which does not check the policy setting and “works” even when the policy is enabled. However, this grossly violates the spirit of FIPS. If an audit of the application’s code were to occur (which is much more likely for those that are being asked to follow FIPS), it would fail the audit.

This brings me back to my original point: if no one is telling you to care about FIPS, then don’t care about it – it can be a headache.

On Pair Programming

I’ll take a slight detour from my usual “fact” based blog posts and focus on a matter of opinion, one that I have swung like a pendulum on myself: pair programming. Pair programming is something I’ve been doing for about 8 years now, and I think it is time I finally wrote some thoughts on it. My experience with pair programming has been “pair program 100% of the time”: if there was work to be done, we tried to pair program on it.

For those unfamiliar with pair programming, the gist of it is two developers, one computer. You have two (or four, six, etc.) monitors connected to a single computer, with two keyboards, two mice, and two developers work together to accomplish a goal.

[Photo: a quad-monitor desktop setup for two-screen pair programming.]

Naturally, this might seem a bit confusing at first, but the origins hark back to the days of Extreme Programming (XP for short), something that I was first exposed to in 2006. The two developers work together, writing the same code, helping each other. One person does the “driving” of the computer, while the other watches for mistakes, provides input, or mentors. Eventually they switch. There are whole books on the subject of how to do pair programming, but that’s the single-paragraph version of it.

The uninitiated might first think, “Isn’t pair programming a tremendous waste of time?” Pair programming promises to make up for the additional use of a developer by offsetting it with increased quality. The argument is persuasive: bugs and defects are often the most expensive things to fix. If they can be limited during the development phase, and better quality and architecture are the result, then you end up with a net positive. You also get free knowledge transfer between team members.

Looking deeper, though, there are too many problems with pair programming. I won’t argue against the point that code quality usually ends up being better, and some bugs get caught. However, the cost associated with those gains is too steep. I could just as easily require three people to code review all work; that would also result in better quality. So increased quality cannot be the only deciding factor as to whether or not pair programming is successful for a team.

When I was first introduced to pair programming, I was a junior but capable developer, and pair programming really worked in my favor. Rather than being thrown into a code base I had absolutely no familiarity with, I was working with people that had experience with the code, and pairing with a senior team member. As we worked together, I was seeing not only the code produced, but also his reasoning and the thought process behind what was being accomplished.

At the time, I didn’t think much about how the senior developer felt about pair programming. Eight years later, I have a pretty good idea: it’s hard. Being in a constant mentoring mode while being driven by hard deadlines is a constant balancing act, one that, after years and years of doing it all the time, takes its toll. This “senior-junior” pair really only benefits a single person: the junior. It detracts from the senior developer’s sense of accomplishment, which is something most senior developers thrive on. The senior developer’s goal is not to educate the junior person; it is to get things done. At worst, this can contribute to burning out that senior developer. The list of potential issues goes on, from engagement to personality.

I’m not writing this to shoot pair programming down; I just dislike dogma. There is a right time and a wrong time for everything, and applying a principle before a situation has been fully understood is asinine. Pair programming, like anything, should be evaluated for whether it is applicable in a given situation. Ad-hoc pair programming, I think, can be powerful: rather than pairing all of the time, asking a colleague for a few minutes or an hour of their time to work through something difficult is a totally normal thing to do.

I would even take it a step further and replace pair programming with actual mentoring and coaching time if you need to get a junior up to speed. If structured mentoring is put in place, the junior developer gets better results. And because the senior developer is focused on mentoring, it doesn’t detract from their sense of accomplishment; rather, it contributes to it.