Refreshing .NET Assembly Binding Redirects in a Visual Studio Solution

What Exactly are Binding Redirects?

Binding Redirects exist to solve the problem of two libraries requiring different versions of the same assembly, when only one version can be loaded. For example:

  • Library A depends on v1.1 of Library C
  • Library B depends on Library A
  • Library B depends on v1.2 of Library C

So in this case, Library B wants to use v1.2 of Library C, but its dependency Library A expects v1.1. NuGet will install v1.2 of Library C, and in order to reassure Library A that its dependency on Library C can still be satisfied it adds a Binding Redirect to say “sorry, we don’t have v1.1 of Library C, but we have v1.2, and you should be good to use it in its place.”

The following binding redirect specifies this:

    <dependentAssembly>
        <assemblyIdentity name="LibraryC" … />
        <bindingRedirect oldVersion="1.1.0.0" newVersion="1.2.0.0" />
    </dependentAssembly>

Of course there is a risk here that v1.2 of Library C won’t be compatible with Library A, leading to errors at runtime, and that is something only the app developer can verify using all of those integration tests they remembered to write.
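For reference, a redirect like this sits under the runtime element of the application’s app.config or web.config. A minimal sketch (the publicKeyToken here is a made-up placeholder, and note that NuGet normally writes oldVersion as a range rather than a single version):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- publicKeyToken is a placeholder; use LibraryC's real token -->
        <assemblyIdentity name="LibraryC" publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- NuGet typically redirects the whole range of old versions to the installed one -->
        <bindingRedirect oldVersion="0.0.0.0-1.2.0.0" newVersion="1.2.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

The urn:schemas-microsoft-com:asm.v1 namespace on the assemblyBinding element is significant: any tooling that reads or strips these redirects has to query the config XML using it.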

Keeping your binding redirects in order

Maintaining all of these binding redirects can become a bit of a problem if you have a large number of projects in your solution. Old redirects will stick around, as NuGet won’t automatically remove them due to the risk of those runtime errors that only testing can confirm.

I’ve found that they can cause headaches with source code merges if your team is maintaining multiple branches.

The NuGet package manager provides a cmdlet, Add-BindingRedirect, that will add all of the necessary binding redirects to a project; however, it won’t remove the old binding redirects that no longer apply.

The following PowerShell, run in the Package Manager Console, will apply it to every project in the solution:

PM> Get-Project -All | Add-BindingRedirect 

My solution to refreshing binding redirects

DISCLAIMER: This code is provided with no warranty whatsoever. Any changes to your Binding Redirects should be tested thoroughly.

To go that extra step I implemented a Remove-BindingRedirect cmdlet, which removes the assemblyBinding entries from the project’s config and can be included in the PowerShell pipeline before the call to Add-BindingRedirect:

function Remove-BindingRedirect {
    param(
        [parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $Project
    )
    process {
        $ProjectDir = Split-Path $Project.FullName
        $ConfigFileName = $Project.ProjectItems | Where-Object { $_.Name -eq 'web.config' -or $_.Name -eq 'app.config' }
        if ($null -ne $ConfigFileName) {
            $ConfigPath = Join-Path -Path $ProjectDir -ChildPath $ConfigFileName.Name
            $Xml = [xml](Get-Content $ConfigPath)
            $Ns = @{ ms = "urn:schemas-microsoft-com:asm.v1" }
            $Xml | Select-Xml '//ms:assemblyBinding' -Namespace $Ns | ForEach-Object {
                # Remove each assemblyBinding element from its parent (runtime) node
                $_.Node.ParentNode.RemoveChild($_.Node)
            } | Out-Null
            $Xml.Save($ConfigPath)
            Write-Host "Removed bindingRedirects from $ConfigPath"
        }
        else {
            Write-Host "Couldn't remove bindingRedirects from $($Project.Name) as couldn't find a config file"
        }
        return $Project
    }
}

I save this in a file RemoveBindingRedirect.ps1, dot-source it within the Package Manager Console, and then, to refresh all of the bindingRedirects within all of the projects in a solution, I run:

PM> . "RemoveBindingRedirect.ps1"
PM> Get-Project -All | Remove-BindingRedirect | Add-BindingRedirect 

Please ensure your configs are version controlled prior to running this. After running for a good few minutes all of the configs in the solution should be refreshed.

Update: Nick Craver at Stack Overflow has written a good post about Binding Redirects. His overall recommendation is to migrate to .NET Core to avoid these issues, something I agree with.


Configuring Visual Studio as your Git mergetool

Configuring Visual Studio as your Git mergetool can help people familiar with it to resolve conflicts more easily. Here I show you how.

The default option for the Git mergetool is vimdiff, which although perfectly fine, will be unfamiliar to a lot of people, particularly those with a .NET development background. For this reason I’ve changed my config to use the vsdiffmerge component of Visual Studio to do my Git diffs and merges.

Visual Studio Code as default editor

First of all, you may want to change the default Git editor to be Visual Studio Code. This will be used for commit messages if you leave off the -m command line switch when calling git commit. So to enable it, run:

git config --global core.editor 'code --wait'

The --wait option will make the parent process wait for us to close Code before continuing.

The reason I suggest doing this first is that we will be using this editor to edit the config to add VS as a mergetool. To go ahead with this, open the global config with:

git config --global -e

Visual Diff Merge as mergetool

To configure the vsdiffmerge utility as your mergetool, add the following sections to your .gitconfig (open in VS Code after the previous command):

[merge]
    tool = vsdiffmerge
[mergetool]
    prompt = true
[mergetool "vsdiffmerge"]
    cmd = \"C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\Common7\\IDE\\vsdiffmerge.exe\" \"$REMOTE\" \"$LOCAL\" \"$BASE\" \"$MERGED\" //m
    keepBackup = false
    trustExitCode = true

You may need to change the cmd, depending on the location of your Visual Studio installation.
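The simpler settings can also be written with git config rather than editing the file by hand. A sketch (the sandboxed HOME is only there so the example doesn’t touch your real config):

```shell
# Sandbox HOME so this demo doesn't modify your real ~/.gitconfig; omit in real use.
export HOME="$(mktemp -d)"

git config --global merge.tool vsdiffmerge
git config --global mergetool.prompt true
git config --global mergetool.vsdiffmerge.keepBackup false
git config --global mergetool.vsdiffmerge.trustExitCode true
# The cmd line, with its nested quoting, is easier to paste into .gitconfig directly.

git config --global merge.tool    # prints: vsdiffmerge
```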

Invoking Visual Studio Diff Merge

So the next time you perform a merge and it has conflicts, you can start to resolve them with Visual Studio by entering git mergetool in the conflicted repository. As we specified the prompt = true option, you will be prompted for each conflicted file.

Hopefully this post has helped with your Git merging!


Git Submodules vs Git Subtrees

The number one issue I’ve seen when people start using Git is dealing with submodules in existing projects. Recently I’ve been considering moving everything to subtrees, but I don’t see that as a direct replacement. In this post I explain why.

Why use Submodules or Subtrees?

Every organisation has code that is shared between projects, and submodules and subtrees prevent us from duplicating code across those projects, avoiding the many problems that arise if we have multiple versions of the same code.

Subtrees vs Submodules

The simplest way to think of subtrees and submodules is that a subtree is a copy of a repository that is pulled into a parent repository while a submodule is a pointer to a specific commit in another repository.

This difference means that it is trivial to push updates back to a submodule, because we’re just pushing commits back to the original repository that is pointed to, but more complex to push updates back to a subtree, because the parent repository has no knowledge of the origin of the contents of the subtree.

It also means that subtrees are much easier for other people to come and pull, as they are just part of the parent repository.

So an ultra-dumbed-down ELI5 comparison of submodules to subtrees could be:

  • Submodules are easier to push but harder to pull – This is because they are pointers to the original repository
  • Subtrees are easier to pull but harder to push – This is because they are copies of the original repository

I will elaborate on this, so pardon the simplification.

A brief overview of git submodules

Adding a submodule

If I wanted to add a submodule to an existing git repository I’d run something like this:

$ git submodule add lib/awesomelib
Cloning into 'lib/awesomelib'...
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 0), reused 11 (delta 0)
Unpacking objects: 100% (11/11), done.
Checking connectivity... done.

If I then ran git status I’d see this:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>…" to unstage)

    new file:   .gitmodules
    new file:   lib/awesomelib

The .gitmodules file has been created, and its contents will be:

[submodule "lib/awesomelib"]
      path = lib/awesomelib
      url =

So the three key consequences of the submodule add are:

  1. The .gitmodules file has been added in the root of the repository, containing the path and URL for the added submodule.
  2. The lib/awesomelib folder now contains a full clone of the repository. With one key difference…
  3. The .git folder for the submodule repository has been added in the .git/modules folder at .git/modules/lib/awesomelib rather than lib/awesomelib/.git. The location lib/awesomelib/.git contains a file with a single line gitdir: ../../.git/modules/lib/awesomelib pointing to the real .git folder (the nested repository’s alternative to a full-blown .git folder).

Both the advantage and disadvantage of submodules is that they can and should be treated as a repository of their own. They will need to be committed to separately, and can be branched separately. The lib/awesomelib directory in the example above should be treated as nothing more than a pointer to a particular SHA-1 in another repository.
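That “pointer” nature is visible in the parent’s tree object itself. A self-contained sketch using git plumbing (repository names are made up, and git update-index --cacheinfo is used to record the same kind of gitlink entry that git submodule add produces):

```shell
# Demonstration: a submodule is stored in the parent's tree as a "gitlink"
# entry (mode 160000) recording only a commit SHA-1, not file content.
# Everything below runs in a throwaway temp directory.
set -e
cd "$(mktemp -d)"

# A stand-in for the shared library repository
git init -q awesomelib && cd awesomelib
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial"
sub_sha=$(git rev-parse HEAD)
cd ..

# A stand-in for the parent repository; register the gitlink with plumbing
# (the same tree entry that `git submodule add` would record)
git init -q parent && cd parent
git update-index --add --cacheinfo "160000,$sub_sha,lib/awesomelib"
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add submodule pointer"

# The tree entry is of type "commit", mode 160000 -- just a pointer to a SHA-1
git ls-tree HEAD lib/awesomelib    # prints: 160000 commit <sha>	lib/awesomelib
```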

You may already be able to see some of the issues that can occur if you ignore the fact that the submodule needs to be kept up to date:

  • Changes to the parent could be committed and pushed without having committed and pushed the changes to the submodule.
  • If a collaborator has modified and pushed changes to a submodule but you haven’t run git submodule update to update the submodule on your machine to their latest version, you may run git add -A and downgrade it to your out-of-date version.

Pulling from a submodule

This is just a case of:

  1. Changing directory to the submodule repository
  2. Pulling from the remote
  3. Moving up again to the root of the parent repository
  4. Committing the pointer to the new HEAD commit of the submodule

Any changes from the last committed submodule commit will be listed as modified, and can be included in the next commit to the parent repository.
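Those four steps can be sketched end to end. A self-contained walkthrough in a throwaway directory (repository names are made up; “awesomelib” stands in for the shared library, and the protocol.file.allow setting is needed on recent git versions to add a submodule from a local path):

```shell
set -e
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
cd "$(mktemp -d)"

g init -q -b master awesomelib                       # the shared library "remote"
(cd awesomelib && g commit -q --allow-empty -m "v1")

g init -q -b master parent                           # the parent repository
cd parent
g -c protocol.file.allow=always submodule --quiet add ../awesomelib lib/awesomelib
g commit -q -m "Add awesomelib submodule"

(cd ../awesomelib && g commit -q --allow-empty -m "v2")   # a collaborator advances the library

cd lib/awesomelib                                    # 1. change into the submodule
g pull -q origin master                              # 2. pull from its remote
cd ../..                                             # 3. back to the parent's root
g add lib/awesomelib                                 # 4. commit the pointer to the new HEAD
g commit -q -m "Update awesomelib to v2"
git log -1 --format=%s                               # prints: Update awesomelib to v2
```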

Pushing to a submodule

The only difference between making changes to code within a submodule directory and a regular directory is that we must commit and push to the submodule repository before then moving up a directory and committing the pointer to the new submodule commit and pushing that to the remote of the parent repository.

I think this needs a more detailed example, which I’ll start by adding a file to the submodule folder:

$ cd lib/awesomelib
$ touch hello.txt
$ git status
HEAD detached at 2c81f4f
Untracked files:
  (use "git add <file>..." to include in what will be committed)

	hello.txt

nothing added to commit but untracked files present (use "git add" to track)

When the contents of a submodule folder have been modified they appear as a single line if we run git status in the parent repository:

$ cd ..
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
  (commit or discard the untracked or modified content in submodules)

    modified:   lib/awesomelib (untracked content)

no changes added to commit (use "git add" and/or "git commit -a")

This output from git status can be confusing, because it looks like only a single file has changed, when in fact there could be massive changes within the submodule directory.

If I see a modified submodule directory and I haven’t modified it myself, I tend to run git submodule update to ensure that the checked out code for the submodule is the version it’s expected to be.

If you don’t do that, you are likely to end up committing the incorrect version of the submodule that is present in your working copy.

As the changes in this example are deliberate, we should commit them, by changing directory to lib/awesomelib to commit our changes, and then pushing them:

$ cd lib/awesomelib
$ git add -A
$ git status
HEAD detached at 2c81f4f
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

    new file:   hello.txt
$ git commit -m "Test file."
[detached HEAD 6498362] Test file.
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 hello.txt

Ignore the “detached HEAD”; it’s not ideal, but it’s not relevant to this example.

So I’ve created a new commit in the submodule, but I haven’t yet pushed. If I move up a directory, I will then be back in the parent repository, and I will see that the submodule has a new commit:

$ cd ..
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

    modified:   lib/awesomelib (new commits)

no changes added to commit (use "git add" and/or "git commit -a")

There’s nothing to stop me from committing this change in the parent, even though I haven’t pushed the submodule change to the remote. So I need to make sure that after a submodule commit I also push, from within the submodule:

$ cd lib/awesomelib
$ git push origin master
Counting objects: 62, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (40/40), done.
Writing objects: 100% (62/62), 11.63 KiB | 0 bytes/s, done.
Total 62 (delta 22), reused 58 (delta 21)

Now I’m safe to commit the submodule change in the parent repository:

$ cd ..
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

    modified:   lib/awesomelib (new commits)

no changes added to commit (use "git add" and/or "git commit -a")
$ git add -A
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

   modified:   lib/awesomelib
$ git commit -m "Test file."
[master 0297f84] Test file.
 1 file changed, 1 insertion(+), 1 deletion(-)

And push it as normal:

$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 310 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)

That may seem quite convoluted, but we are dealing with two separate repositories, so there is always going to be twice as much work.

The order in which you commit and push changes when working with submodules is so important that I consider it the golden rule of modifying submodules…

The golden rule of modifying submodules

Always commit and push the submodule changes first, before then committing the submodule change in the parent repository.

As mentioned above, a submodule is nothing but a pointer to a specific commit in an external repository, so how can you possibly commit and push that pointer if the commit it points to doesn’t exist on a server somewhere, accessible by everyone’s parent repositories?

Without following this rule you can get into a confusing state in which the parent repository is pointing to a submodule commit that only exists on your local machine. The tooling should warn about this and reject the push, but I haven’t seen it happen yet.
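For what it’s worth, newer versions of git can enforce the golden rule at push time via the push.recurseSubmodules setting. A sketch (the throwaway repository here just stands in for any repository that uses submodules):

```shell
set -e
cd "$(mktemp -d)" && git init -q .    # stand-in for a repository with submodules

# Ask git to verify at push time that every submodule commit referenced by the
# parent exists on the submodule's remote, and refuse the push otherwise:
git config push.recurseSubmodules check
git config push.recurseSubmodules     # prints: check

# Per-invocation equivalents:
#   git push --recurse-submodules=check origin master
#   git push --recurse-submodules=on-demand origin master   (push missing submodule commits first)
```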

Issues with Submodules

Issues with submodules tend to arise due to the poor tooling. As mentioned, I’ve found that it is necessary to manually run a git submodule update each time I pull updates and find that a submodule has been updated, and it’s also necessary when switching between branches. You can tell if it’s been updated because a clean checkout will say that the submodule has been modified.

If you don’t notice that you need to update the submodule, all it takes is a lazy git add -A or git commit -a and you’ve downgraded the submodule to the version you’ve had in your working copy all along. This stale submodule can cause the entire project to get into a mess.

If you define an alias which runs git submodule update after every single git pull then you will be safe, but a newbie is unlikely to do this.

A brief overview of git subtrees

Adding a subtree

The following call to git subtree will be roughly equivalent to the git submodule command above:

$ git subtree add --prefix lib/awesomelib master --squash
git fetch master
warning: no common commits
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 0), reused 11 (delta 0)
Unpacking objects: 100% (11/11), done.
Resolving deltas: 100% (7/7), done.
From h
 * branch            master     -> FETCH_HEAD
Added dir 'lib/awesomelib'

This will clone the remote repository into the lib/awesomelib folder, and create two commits for it.

The first is a squash of the entire history of the remote repository that we are cloning:

commit 70a0b8b8e2c76d9bcfd00f8f935d11941d2937d8
Author: Martin Owen <>
Date:   Sat Apr 9 19:50:49 2016 +0100

    Squashed 'lib/awesomelib/' content from commit d3abff6

    git-subtree-dir: lib/awesomelib
    git-subtree-split: d3abff6e5307227858d5323cf8aaf108c542ad2b

The second is a merge commit, which includes the squash commit’s SHA-1 in its message:

commit df09e101ac1bcb1e6d48cb4ab6b28c707b5b0402
Merge: cc78b8d 70a0b8b
Author: Martin Owen <>
Date:   Sat Apr 9 19:50:49 2016 +0100

    Merge commit '70a0b8b8e2c76d9bcfd00f8f935d11941d2937d8' as 'lib/awesomelib'

If I run git status, I’ll see nothing, as git subtree will have created the commits for me and left the working copy clean. There will also be nothing in lib/awesomelib to indicate that the folder ever came from another git repository. As with submodules, this is both an advantage and a disadvantage.

Pulling from a subtree

Pulling changes from the remote to the subtree isn’t difficult at all, and is very similar to the add:

$ git subtree pull --prefix lib/awesomelib master --squash

You should be able to see that the parameters are exactly the same as the add; we’ve just changed the command to pull. It will also create a similar pair of commits to the earlier add.

So far so good.

Pushing to a subtree

Things get really tricky when we need to push commits back to the original repository. This is understandable because our repository has no knowledge of the original repository, and has to figure out how to prepare the changes so that they can be applied to the remote before it can push.

$ git subtree push --prefix lib/awesomelib master
git push using: master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 325 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
   2c81f4f..f0a54ff  f0a54ff7151a05ae9408a45daba88164bd4ab8cd -> master

In my experience how long this takes to run depends on the amount of history in the parent repository, your OS, and your machine. I’ve seen it take so long when running the command in a large repository on Windows that I had to give up and go back to using submodules, but I’ve found it to work more quickly on OS X.

Reading the implementation, the split command (run as part of a push) is what takes the significant time, but I’ve not been able to determine exactly why.
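To see the expensive step in isolation, git subtree split can be run on its own. A self-contained sketch (repository layout and branch names are made up):

```shell
set -e
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
cd "$(mktemp -d)"

# A repository with some history under the lib/awesomelib prefix
g init -q -b master .
mkdir -p lib/awesomelib
echo "hello" > lib/awesomelib/hello.txt
g add -A && g commit -q -m "Add awesomelib"

# The slow part of `git subtree push`: rewrite history so that only commits
# touching the prefix remain, re-rooted at the prefix, on a new branch.
g subtree split --prefix lib/awesomelib -b awesomelib-only

# The split branch contains the subtree's files at its root; a push is then just
#   git push <repository-url> awesomelib-only:master
git ls-tree --name-only awesomelib-only    # prints: hello.txt
```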

Issues with Subtrees

After so many issues with submodules I had high hopes for subtrees, but was quite disappointed. For a start there is very little documentation. The plain text file shipped in git’s source tree is the best official documentation I’ve found, and everything else I know has come from either Stack Overflow or blog posts.

My other main issue is the slow push speed on Windows that I mentioned; I’ve found it to be so bad that it has made subtrees unviable for me.


In my opinion subtrees are not a direct replacement for submodules. The way I believe you should split your shared code between subtrees and submodules is this:

  • Is the external repository something you own yourself and are likely to push code back to? Then use a submodule. This gives you the quickest and easiest way to push your changes back.
  • Is the external repository third party code that you are unlikely to push anything back to? Then use a subtree. This gives the advantage of not having to give people permissions to an extra repo when you are giving them access to the code base, and also reduces the chance that someone will forget to run a git submodule update.

If you think I’m a complete idiot who has totally misunderstood and misrepresented submodules or subtrees, please let me know in the comments.


Blogging Tips

Blogging is easy to start but hard to maintain so you need to make sure you’re clear on why you’re doing it, and how to make it both enjoyable and worthwhile. That’s why I’ve put this post together, as a reference for myself and hopefully someone else.

Most of the things I’ve learned about blogging are from either John Sonmez (he has a great blogging email course and he’s a massive advocate of programmer blogging) or the Pragmatic Programmers book Technical Blogging. I highly recommend checking out both.

Why are you blogging?

Learning by teaching

I’m blogging primarily so that I can learn. If you want to really learn something you should teach it, something I’ve realised myself over time and seen repeated by experts regularly.

Planning to write a public blog post about a subject is an incentive to make you understand it to such a degree that you can write about it without sounding like an idiot.

These blog posts also act as notes on a particular topic, helping me organise my thoughts, and will be there for me in future. They make my notes more formal, and invite corrections from commenters.

Learning to write

Another reason I’m writing is to improve, particularly with my technical writing. Even if I’m not proud of any of my blog posts, I will be proud of the fact that I spent time expressing my own opinions rather than consuming someone else’s via yet another article/video/podcast elsewhere.

Communicating with like minded people

Plenty of blogs exist to either stake a claim within a niche or just start a dialogue with the people in it. This is vitally important if the niche is only just emerging and you want to help it develop and become an authority within it.

Be consistent but also realistic

Setting a regular blogging schedule is an incentive to have something to write about, and this is in turn an incentive to build up knowledge on topics. This can only be a good thing.

I find technical posts more time consuming than other posts, as they need more background research and fact checking, so I alternate between technical and non-technical when posting to give me the best chance of hitting a posting schedule.

Consistency beats quality (because quality will come over time)

Of course it is better to just put content out there consistently, as it will help your writing evolve, and help you find the niche that inspires your best writing.

If only one in every five posts is any good it’s not a problem, because I still have a good post. Each of the five posts should be considered part of the journey to that one good post. And it’s also beneficial from an SEO perspective.

Don’t give in to resistance

John Sonmez’s book Soft Skills for Programmers led me to the book The War of Art, which is about procrastination, something you will know all about if you’ve ever tried to write anything. It’s a great book on our own internal resistance to creative endeavours, and emphasises that the key is to understand there will constantly be points at which you will want to stop writing, and that it’s essential to carry on.

Find a word processor that you love to use

I’m writing this post using iA Writer. It is very enjoyable to use. I’m a great believer that if something is fun you will be inclined to do it. Every extra sentence I write in Writer is fun. I paid £8 for it and it’s worth every penny as it means I actually write.

When I used to edit posts in Emacs it wasn’t as nice as Writer, and crucially it wasn’t mobile. Like the WordPress mobile app, your editor should allow you to write on both mobile and desktop, or you just aren’t going to capture the thoughts you have when away from your desk.

Keep the writing simple

There is an excellent short blog post by Dilbert creator Scott Adams entitled The Day You Became A Better Writer. In it he talks about a one-day business writing course he attended and how it taught him to keep his writing simple. The advice is short and sweet, and should be referred to often.

Just use WordPress

On the technical side of things, be sure to use WordPress as your blogging platform. Seriously. As with many programmers I’ve spent a lot of time with static blog engines in the past and they just can’t compete with what WordPress has to offer.

With static engines, I would spend way too much time tweaking and not enough time actually writing content. I used to think I was so smart and efficient with my static site that I’d edit in Emacs, commit to Git, and push to GitHub Pages, but really all I was doing was tweaking the layout.

Programmers feel that if they’re not hacking together their own static engine and layout then they’ll be seen as failures, but if you’re not getting traffic because you’re not writing consistently enough or your content isn’t optimised (both due to reasons I’ll elaborate on below) then your publishing process is irrelevant.

It’s all about the SEO

If you don’t use the excellent SEO plugins available for WordPress, then either you’ve spent a lot of time developing SEO skills (or want to develop them) and can apply them to your own site, or you just don’t care about SEO.

If you don’t care about it then you’re going to miss out on so much traffic, and should be questioning why you’re blogging in the first place.

It now has an awesome mobile app

You can’t argue with the WP mobile app. I don’t need to be at a computer to post, or publish, or organise my blog. This is an enormous advantage over a static site.

You may argue that services like Medium and Ghost have their own well-developed apps, but I am only interested in platforms that allow me to self-host. If I’m going to invest time in content I don’t want to have to migrate it at short notice a few years down the line (anyone remember Posterous?). And yes, I know Ghost allows self-hosting, I’m just making a point! 😀

Nowadays swiping the keyboard on my phone is actually better than typing on the keyboard on my computer. I can of course write a post up in a note taking app and then import it into WordPress later, but why not just write it directly?

Principles of my blogging

So in summary I’d say my blogging principles are now:

  • Teaching others in order to teach myself
  • Making it fun in order to stay consistent
  • Using WordPress to give me great SEO out of the box
  • Keeping the writing simple and not giving in to resistance

Tech Interviews – An Interviewer’s Perspective

Note this post is written from the perspective of a software developer hiring other software developers. I imagine a manager hiring a software developer will have a different perspective.

The more interviewing of developers that I do the more it opens my eyes to how an interview impacts the interviewer, and I wanted to share here.

The Interviewer Desperately Wants to Hire Someone

Something that interviewees often don’t realise is that in most cases your interviewer wants to be done with the current round of interviews as soon as possible. They want nothing more than for the person sat in front of them to be The One, because it will get them back to doing their actual job.

The person interviewing is probably super busy as there are only two logical reasons for hiring:

  1. The developers have too much work (and depending on the company the “too much work” threshold could be really high).
  2. One of the developers is leaving or has already left (and this could potentially be because of issues caused by being overworked).

If it is 1) then you’ve probably prepared for the interview more than they have. You only have to attend one interview, they have to attend many. Until I started interviewing I didn’t appreciate this enough. There is a decent chance that your interviewer hasn’t prepared at all and is just dusting off an old set of questions.

If it is 2) then they are probably demoralised and trying hard not to show it. They will certainly be trying hard not to show how busy they are, otherwise you are going to reject the job or demand a salary that puts you out of reach.

Interviewing Is More Time Consuming Than the Candidate May Realise

When you’re on the other side of the fence you can fail to realise that recruiting is a very time-consuming process.

  • Calls with recruiters (very talkative people who are hard to get off the phone, and are likely to call often).
  • Preparation for the interview. Even if an interview is only going to take an hour, paperwork must be printed off, laptops must be prepared for code tests, and interviewers will have to wait around for the candidate to arrive. This distracts from real work.
  • Debriefing. Good notes are necessary if you are going to be able to properly review an interview and make the best decision. These take time, unless you’ve written the bulk of them during the interview (even if you have they are likely to need summarising for management).

The Interviewer’s Manager

As well as the interviewer’s own wish to get back to productive work, there is also likely to be a manager wanting the interviewer to find a candidate as quickly as possible for the same reason. If someone is leaving they want to be able to tell their own boss ASAP that everything is back under control.

This can lead to varying levels of desperation on behalf of the interviewer, depending on how long the round of interviews has been underway.

The Candidate

If you have an active Stack Overflow or GitHub account you will not only be ahead of the other candidates, but the interviewer will love you as you are making their life much easier. Seriously, even a shitty project you threw together on a Saturday afternoon is better than a gaping void.

This could backfire if the interviewer sees something atypical (or out of date) in your project that they don’t like, but I still think that any code is better than no code. A complete lack of an online presence for a developer is slightly suspicious and could stop you from even being invited to an interview.

If your online presence leaves you in a positive light there is a good chance it will cut the time required for the interview, and this is positive for all involved.

When Being Interviewed, Consider the Interviewers

So when attending technical interviews in future, enter the interview understanding that the interviewer really wants to hire you. Be respectful of their time, as they are likely to not have much of it.


Avoiding distractions as a programmer

Distractions are kryptonite for programmers and should be avoided at all costs. Reducing them is key to productive software development: as a programmer you should be aiming to spend as much time as possible in a state of uninterrupted flow.

Distractions lead to insecurity

As you become more senior in your career you may think that you have become less of a programmer, that you lack skills you had earlier in your career. What is more likely to be the case is that you simply have far more distractions.

We all remember how we were during the first few weeks of a new job, when our email inbox was empty, nobody walked over to ask us questions, there were no random IMs popping up, and no urgent escalations.

It’s simply a fact that the more time you spend in a particular position at a single company, the more knowledge you will acquire and the more people within the company will consult you to share that knowledge.

It’s very easy to fall into the trap of responding to every request for your time as it comes in. This is fine for the people requesting your time, but it is terrible for you and, ultimately, your employer.

Permission to waste time

Distractions occur when you give yourself permission to waste time. This happens when you aren’t clear on the work you should be focussing on at the moment an interruption or distraction arrives.

The Pomodoro Technique can take away this permission to waste time. During a 25 minute pomodoro all interruptions and distractions should be deferred, and it is clear what work is being focussed on. Very few distractions are so important that they can’t wait until the pomodoro is complete.

Permission to relax

The Pomodoro Technique also gives us permission to stop working, because there has to be a short break after every pomodoro. This gives time to review the work that has been completed and to make any changes to the plan. These regular breaks prevent loss of focus.

It’s recommended that you actually leave your desk for this kind of relaxation, or at least step away from your keyboard, because your brain doesn’t fully rest while you are staring at your computer screen.

Logging time to identify time sinks

I’ve been using Kanbanflow to start each day with a plan of what I will be working on (including all meetings and conference calls that I have scheduled), and then it is clear to me that I don’t have permission to waste any time if I want to achieve what I have committed to.

When you actually see in advance how many calls you are attending it becomes clear how much time is going to be lost to them, and you can make an informed decision about which ones you really should be ducking out of.

Kanbanflow also logs where my time was spent so that I can review how I’m spending my time and adjust to deal with any areas of work that I’m neglecting. In the past I’ve gone for long periods knowing that I’m wasting time, but not being specific about what I was wasting it on.

Making time for work that is important but not urgent

If we’re not constantly following distractions, then we can make time for work that will improve our productivity, but isn’t required urgently and therefore typically doesn’t get done. These are what productivity author Stephen Covey referred to as Quadrant 2 activities.

These tasks are key to really improving our productivity, and they are so easy to leave out if we aren’t in control of what we’re working on each day.

In summary

Giving yourself permission to ignore distractions is key to being productive. Use the Pomodoro Technique to defer distractions until you are finished with the task that you are currently working on.


Should I Write Tests?

I often see this question asked in various forums on-line, and everyone rushes in to say “yes, of course you should”. Although I do agree, I don’t think automated testing should just be blindly included in every piece of work, so I wanted to describe the scenarios where automated testing really is beneficial.

  1. Complex distributed logic that is impossible to get your head around quickly, particularly after a long time away from the code
  2. Complex isolated logic that has so many permutations that it is hard to cover all of them with a manual test
  3. Logic that is dependent on scenarios that are difficult to reproduce with manual testing
  4. Logic on which we depend but don’t control (third-party packages or APIs)

Complex Distributed Logic

This is the kind of logic that has multiple moving parts, distributed as separate services within a whole solution, and a change to one part can inadvertently bring a large part of the application down.

The testing here takes the form of high-level integration tests, either because unit test coverage isn’t good enough, or because we haven’t had time to mock isolated scenarios but have had time to generate fake data for them (which amounts to much the same thing as not having enough unit test coverage).

This kind of automated integration testing stops development (and likewise refactoring) from grinding to a halt when it’s impossible to get a run-through of the entire application into your head at one time.

Complex Isolated Logic

Sometimes a change, particularly a bug fix, appears on the surface to be simple but in reality has so many permutations that it is difficult to pin down all the scenarios it has to support. Automated testing is invaluable here, and can be the difference between a successful deployment and an immediate rollback.

I’ve been in scenarios where QA was waiting for bug fixes to be deployed to their test environment, and their tests couldn’t be allowed to fail due to an upcoming release window. I had time constraints of my own (often needing to complete fixes within a matter of hours), and without unit tests it would have been impossible to develop quickly and be confident that the fixes would work.

This is like the unit testing equivalent of the integration testing of distributed logic above.

Hard to Reproduce Test Scenarios

If you’ve ever done work across time zones you’ll know that it’s unfeasible to manually test an application by changing the timezone of the local machine’s clock and running through a test script. The only way to really test this kind of thing is by injecting a system clock into your code, and faking an instance of it for your tests.

This also applies to code that tests various permutations of asynchronous result handling. It’s impossible to manually reproduce results being returned in certain orders and after certain times, without faking it in a test.

There is some overlap here with 2. Complex Isolated Logic, as hard-to-reproduce scenarios often pin down complex logic.

Uncontrolled Logic

Being able to fake a third-party is incredibly useful. We can make our assumptions explicit in our mocking code, start building before new third-party functionality is available, fake exceptional behaviour, and exercise code paths that make expensive API calls without incurring a cost. All of this often makes a good third-party mock well worth the development effort.

Building a mock of a third-party is often a no-brainer as our automated tests can’t be run against a live API. We can sometimes take an approach that it isn’t our concern whether a third-party API works as expected, as we can always raise a ticket if it doesn’t, but an accurate mock saves us from any last-minute surprises.

This ties in with 3. Hard to Reproduce Scenarios, as we can use our fake third-party to reproduce error conditions which we could never trigger against a real API.


Of course, after writing this all out I’ve come to the conclusion that at least one of the points above is likely to apply to most non-trivial applications, meaning that tests will become essential sooner or later.


Client Side Package Management in Visual Studio 2015

If like me you’ve always had one foot in the open source development camp, then you’ll be really pleased by the recent changes in ASP.NET 5. Microsoft have stopped reinventing the wheel and accepted that the existing open source tools for client-side package management should be integrated into Visual Studio.

Gulp, Grunt, Bower, NPM – what’s the difference exactly?

I’ll start with a summary:

  • NPM is the package manager that installs the other tools discussed in this post, as they all run on node.js locally.
  • Gulp and Grunt are both task runners running on the node.js runtime, and their main functions are to pre-process and/or bundle our client side JavaScript and CSS.
  • Bower is a package manager for all the HTML, JavaScript, CSS, fonts, and images that are bundled with a modern UI package or framework.

The NPM and Bower package managers are smart enough to resolve all the dependencies required by a package, and make sure that we only download a single instance of a given dependency.


NPM is a JavaScript package manager, and became the standard package manager for node.js a number of years ago. Every NPM package comes with a package.json file which has details of the package’s current version, dependencies, contact info and documentation, and scripts that should be run at specific points in its life-cycle.
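For illustration, a minimal package.json along those lines (all names, versions, and scripts are invented for this example):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "Hypothetical example package",
  "author": "Jane Developer",
  "dependencies": {
    "gulp": "^3.9.0"
  },
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

NPM reads this file to resolve the dependencies and to run the life-cycle scripts, such as "postinstall" after the package is installed.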


ASP.NET Gulp Docs

Gulp is so named as it is based on piping streams through multiple commands until all the commands are complete. This piping means that Gulp takes more advantage of the asynchronous nature of node.js, and can give better performance.

The standard ASP.NET 5 project templates use Gulp as the default task runner, so if you create an ASP.NET 5 project with Visual Studio, Gulp will be available straight away. If you right-click on the gulpfile.js in the Solution Explorer and then click the Task Runner Explorer, then you will be able to see all the individual tasks that are defined.

Gulp Tasks

Tasks are defined using a JavaScript function, and can declare dependent tasks as an array of strings naming existing tasks. If a task takes a long time to run, it can be worth splitting it into smaller tasks so that we can run just the part we need.

Gulp Modules

Tasks run code from modules that are required by the gulpfile to do things such as cleaning out your build directory. You would do this by requiring the rimraf module and then calling it within a “clean” task, passing your build directory in as a parameter.

Gulp modules are installed using NPM.


ASP.NET Grunt Docs

Grunt is a Task Runner similar to Gulp, and also has integration in Visual Studio 2015. It takes more of a declarative approach to defining tasks: you require already available Grunt tasks, and specify parameters for them by using JSON. I won’t go into as much detail on Grunt here as I’m planning to stick with Gulp for task running in future.


ASP.NET Bower Docs

Bower is a package manager for client-side code. It was created by the team behind Bootstrap to give people a standard way of obtaining updates to it. We require so much client-side code from so many sources, each with its own dependencies, that it has become too much of a handful to just commit random snippets into version control and expect ourselves to manually keep everything up to date.

You can think of Bower as NuGet for the static third-party code that your web application requires. Rather than you downloading packages from the web and possibly resolving dependencies by hand, it takes care of downloading everything needed for a particular package.
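For example, a hypothetical bower.json declaring two packages (the versions are only illustrative); Bower resolves and downloads these, along with any dependencies of their own:

```json
{
  "name": "my-app",
  "dependencies": {
    "bootstrap": "^3.3.5",
    "jquery": "^2.1.4"
  }
}
```

A single `bower install` against this file fetches both packages, and only one copy of any shared dependency.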

If you have experience of working on an application with a decent amount of JavaScript, then you will know that formally managing your third-party JavaScript, and the dependencies that it brings with it, really pays off in the long-term.


ASP.NET Yeoman Docs

Yeoman is a scaffolding tool that generates whole projects from templates. It does the same job as the project templates that already exist within Visual Studio, so I’m not going to go into too much detail on it in this post.