Archive for December, 2015

Boy/Girl Scout Coding

When I was a young lad in the Boy Scouts, earning a total of 3 badges I believe, we had a motto when going camping: “Leave the campsite cleaner than you found it.” We always did, and I still do to this day. I always thought that was a great little motto, for way more than just camping.

Every day in our jobs we get to go into other people’s code, whether to fix a bug (my code) or add a feature (everyone else’s). When we’re in there we really have two options. Option 1: just do what we need to do. Option 2: leave the file a little cleaner than we found it.

Option 2 is what I like to call “Boy/Girl Scout Coding”. It’s not a full-blown refactor of the file, or massive changes that need to be regression tested, but small improvements that make the file cleaner for others. The changes might not even be functional changes; we’re just cleaning up the area around the campsite.

I have an internal list of operations that I like to do, with the assistance of ReSharper and my other extensions, which make most of the changes very easy.

  1. Ctrl+K,D the file.
  2. Remove unused using declarations.
  3. Remove commented-out code.
  4. Fix spelling in string literals and comments (I always have spelling errors).
  5. Read comments and ensure they are still valid.
  6. Add/Remove/Fix/Reorg regions.

Using Ctrl+K,D is a great start; it’s the Visual Studio keyboard shortcut for Format Document. We should all have the same formatting settings, or be using EditorConfig to keep our settings in sync, but as we know, anything can happen, and it’s a quick way to keep everything looking nice. I am very guilty of leaving in commented-out code, especially when I’m working from a template or another page. But once a file is checked in we can always look at its history and get that code back, so keeping it around is redundant.
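For teams that haven’t set this up yet, a minimal `.editorconfig` at the solution root is a sketch of what keeping settings in sync looks like. The property names below are standard EditorConfig; the values are just examples, not any particular team’s standard:

```ini
# Example .editorconfig at the solution root; values are illustrative.
root = true

[*.cs]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
```

With this checked in, Format Document produces the same result on everyone’s machine, so a Ctrl+K,D pass doesn’t generate noisy whitespace-only diffs.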

Using the SpellChecker extension it’s really easy to fix spelling mistakes (that I make all the time) in our string literals and comments. I usually leave spelling issues in code itself (like variable names) alone, as renaming requires more testing. I’m a firm believer in self-documenting code, but there are always very complex blocks of code, or hacks, that require comments; give them a once-over to ensure they still make sense. Finally, #6: I like regions, but others may not. If they are used in a file, make sure they match the coding standards and that the grouping makes sense.

All 6 of the things I do above for Boy/Girl Scout Coding have no, or very little, impact on the underlying IL that gets emitted and no impact on the functionality of the file. It’s just making the human-readable code a little cleaner for the next developer who needs to go into the file. The time investment is minimal (especially with extensions) and in the long run it will pay dividends to the company and to other developers.

This is one of the articles that I posted internally at Paylocity, but I felt it would be useful to my readership here as well. If you’re a developer looking for work, check out the remote (and Chicago local) Careers at Paylocity, and tell them Shawn sent you!

The Dark Art of Performance Optimization

Optimization is a tricky thing to do: it’s difficult to quantify and can have troubling impacts on your whole project. This doesn’t mean you should never optimize your code, workflows, or any other part of your operation. Quite the contrary, you should; just take this article as a signpost warning of dragons.


One should never attempt optimization too early in a project, because over the life of a project everything changes. Frameworks get updated, architecture changes, bugs are found and fixed, dependencies evolve. During the early stages of any project there is a lot of change and, honestly, more important things to do. You need to get features out and in front of customers as quickly as possible. Performance is important, but the fastest feature that a customer can’t use is worthless.

Because of that I leave all non-critical optimization until as late in the process as I can. What is critical optimization? Basically, anything so slow that it fails (timeouts, crashes, etc.) or doesn’t meet requirements (say, a stock trading app targeted at day traders that takes hours) is critical. Optimization along the lines of “it takes 2 seconds and I feel that’s too long” is not in the critical path.

Take the image from Thenuzan’s article on the SDLC: Performance Evaluation is the last step in the process. Personally I believe Performance Evaluation should occur after step 3 and again after step 4, since Testing/QA can easily find some performance issues.

A lot of software projects are never really finished until they are finished. Go-live dates get pushed, betas go on forever, scope creep occurs, pivots happen, etc. But you can usually identify the point when the framework, architecture, tooling, and environments are no longer in a state of flux, and optimization activities can then proceed.

There are a lot of people out there who feel that performance should be feature number one in everything they do. While it’s always important to keep performance in mind, context matters here. If you’re Amazon, your checkout/shopping cart performance is vital. But the vast majority of software developed is not the Amazon shopping cart and will never reach that level of importance to your company.

Perspective is important because performance optimization is complicated, time consuming, fragile and has wide ranging impacts to your project.



Optimization is a balancing act. Imagine a triangle whose corners are architecture, maintainability, and performance, and you have 1 point to place inside it: the more architecture and maintainability you have, the worse your performance; push too far toward performance and the less maintainable and architecturally sound your project will be.

Due to the impact the process can have, it’s important to measure current performance and set reasonable goals. “Making it as fast as possible” is not a reasonable goal. Personally I use a rule of thumb that no operation should ‘lock’ the UI for longer than 3 seconds. But you can’t assault your user with repeated 3-second waits either: imagine a form with lots of drop downs where every selection makes the user wait 3 seconds; you will have pissed off users. Measuring and goal setting give a clear definition of success, which is required when going down the dark pit that is optimization.
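Measuring against a stated budget can be as simple as a stopwatch around the operation under suspicion. Here is a minimal C# sketch; `LoadDropDownData` is a hypothetical stand-in for whatever work you are timing, and the 3-second budget is just the rule of thumb above:

```csharp
using System;
using System.Diagnostics;

class PerfCheck
{
    // Hypothetical operation under suspicion; stands in for real work.
    static void LoadDropDownData() => System.Threading.Thread.Sleep(100);

    static void Main()
    {
        var budget = TimeSpan.FromSeconds(3); // the agreed, measurable goal

        var timer = Stopwatch.StartNew();
        LoadDropDownData();
        timer.Stop();

        Console.WriteLine($"LoadDropDownData: {timer.ElapsedMilliseconds} ms");
        if (timer.Elapsed > budget)
            Console.WriteLine("Over budget; candidate for optimization.");
    }
}
```

The point isn’t the tooling (a profiler or New Relic is better for real work); it’s that “success” is a number you recorded before you started, not a feeling afterward.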

Everything done during performance optimization is a trade-off: you’re trading maintainability and/or architecture for that performance gain. Because of this, changes related to the optimization process should be reviewed by at least a couple of other developers. It’s easy to get tunnel vision when optimizing, and outside perspectives on the impact to others working in the same code base are vital. Tunnel vision can also cloud other paths to achieving the performance gain; what you think is ‘the only way’ rarely is.

Here are my 4 musts for performance optimization:

1. Set realistic performance objectives based on need/use case

2. Measure and Monitor Performance throughout the process

3. Share, present, get feedback and refine approach

4. Know when to start and more importantly when to stop

Performance is important, but always remember it comes at a cost. The more performant your app is, the more complicated the code usually becomes, the more difficult it is to maintain, and the more convoluted the architecture.

If you’re a First Responder or know one check out Resgrid which is a SaaS product utilizing Microsoft Azure, providing logistics, management and communication tools to first responder organizations like volunteer fire departments, career fire departments, EMS, search and rescue, CERT, public safety, disaster relief organizations.

Excessive Memory Size using the Microsoft Service Bus

During periods of high activity on the Resgrid system, NewRelic would send out a large number of warnings due to excessive memory usage. This was great information, but I didn’t know why we would have excessive memory usage without corresponding CPU usage to match.


I like to design my APIs to be as stateless as possible, and the Resgrid APIs are no different. They do a fair amount of business logic and work, but all the heavy stuff is offloaded to backend workers, databases, or Azure itself via the Service Bus. So this slow, progressive rise in memory was troubling.
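The offload pattern looks roughly like this in an API layer using the Microsoft.ServiceBus client library of that era. This is a sketch only: the queue name, connection string, and publisher class are hypothetical, not Resgrid’s actual code.

```csharp
using Microsoft.ServiceBus.Messaging;

public class CallEventPublisher
{
    // Hypothetical connection string and queue name, for illustration only.
    private readonly QueueClient _client =
        QueueClient.CreateFromConnectionString(
            "your-service-bus-connection-string",
            "call-events");

    // The API stays stateless: validate, enqueue, return.
    // A backend worker drains the queue and does the heavy lifting.
    public void Publish(string callId)
    {
        var message = new BrokeredMessage(callId);
        message.Properties["Type"] = "CallCreated";
        _client.Send(message);
    }
}
```

Because the API only ever enqueues and returns, its memory footprint should stay flat under load, which is exactly why the rising memory graph didn’t add up.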

With work underway on moving Resgrid from Cloud Services to App Services, I decided to run some JMeter tests against the API to see what was going on. After a bunch of read-based operation tests: no memory leak. Then I analyzed what was occurring during the NewRelic alerts; there were a lot of sets (writes).

I altered the JMeter tests and found this:

[Screenshot: Visual Studio memory profiling session against Resgrid, 2015-12-17]

That’s around 500MB of memory allocated to just the Microsoft.ServiceBus operations, particularly Microsoft.ServiceBus.Common.IOThreadTimer. Some quick googling and I found this issue on GitHub, so it seems that at least as of the end of 2014 this was a known issue.

I checked, and the version of Microsoft.ServiceBus I had installed from NuGet was 2.4.1, from July 2014, predating the GitHub issue (posted in late 2014). The current version of Microsoft.ServiceBus is 3.0.9, from November 2015. I updated, re-ran my JMeter tests, and this was the result:
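The fix itself is just a package update. From the Visual Studio Package Manager Console, that is a single command (shown without a version pin, which pulls the latest stable release):

```
PM> Update-Package Microsoft.ServiceBus
```

As always with a client library jump across major versions, it’s worth scanning the release notes for breaking API changes before shipping the update.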

[Screenshot: Visual Studio memory profiling session after the update, 2015-12-17]

After the update there was no more memory leak, and an inspection of the objects in memory revealed no large service bus objects in the heap. So if you’re using the Azure Service Bus and experiencing memory leaks, I highly recommend looking at your Microsoft.ServiceBus version.

If you’re a First Responder or know one check out Resgrid which is a SaaS product utilizing Microsoft Azure, providing logistics, management and communication tools to first responder organizations like volunteer fire departments, career fire departments, EMS, search and rescue, CERT, public safety, disaster relief organizations.
