Categories
geek microsoft programming software tips

test Autofac registrations in a .NET unit test

Photo by Fotis Fotopoulos on Unsplash

Since .NET 4.x isn’t quite dead yet, here’s a tip from the glory days of ASP.NET unit testing with NUnit (or whatever framework you like).
Say you’re working on a project with lots of Autofac IOC registration modules. When you refactor code and add new dependencies to a class, it’s easy to forget to add those new registrations to your module.

In the end your code is finished and unit tested, and you think you’re done, until you run the app and *bam*: a runtime error, because you’re missing an IOC registration. This gets even sillier when you have different projects with slightly different IOC registrations, so it might work in one project but not in another.

So how do you go about testing such a thing?
The trick is to create a unit test that registers the same IOC dependencies as your project(s) do, and then instantiate your controllers (or other classes) through the IOC framework. Resolving will throw an exception if you’ve forgotten to register a new dependency, so you’ll catch it as soon as the tests run. Sweet test automation indeed.

The tricky bit is that your tests are not running inside a web request, which handles the lifetime scope of your IOC objects. So you’ll have to fake a request scope yourself in your test to be able to construct all necessary objects.
Luckily, it’s quite easy to do this in your unit tests.

Here’s an example of what that could look like, using NUnit. See the comments for more details.

using System;
using Autofac;
using NUnit.Framework;

public class AutofacMvcControllerResolvingTest
{
    // The Autofac lifetime scope, which keeps track of the created objects.
    protected ILifetimeScope Scope { get; set; }

    // The container builder we use to register all the modules.
    protected ContainerBuilder Builder { get; set; }

    [SetUp]
    public void Setup()
    {
        // Here we set up our IOC container, by adding 
        // all modules you normally use in your project.
        Builder = new ContainerBuilder();
        Builder.RegisterModule<YourAutofacModule>();
        var container = Builder.Build();
        // Now we create our scope. The tag has to match whatever your
        // registrations expect; note that Autofac's ASP.NET MVC integration
        // tags the request scope with MatchingScopeLifetimeTags.RequestLifetimeScopeTag.
        Scope = container.BeginLifetimeScope("httpRequest");
    }

    [TearDown]
    public void TearDown()
    {
        // Cleanup the scope at the end of the test run.
        Scope.Dispose();
    }

    // The TestCase NUnit attribute is handy to list a 
    // number of values which will be passed into the test.
    // That way you only need to write one test, instead of 
    // a test per class type.
    [Test]
    [TestCase(typeof(FooController))]
    [TestCase(typeof(BarController))]
    public void Test_If_Controller_Resolves(Type controllerType)
    {
        // We create the given type using the IOC scope.
        var controller = Scope.Resolve(controllerType);
        // Assert it isn't null, although if your registrations are wrong, 
        // the line above will already have thrown an exception.
        Assert.IsNotNull(controller);
    }
}
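
If you’d rather not maintain that TestCase list by hand, you can enumerate the controllers with reflection. Here’s a minimal sketch using NUnit’s TestCaseSource, assuming all your controllers live in the same assembly as FooController; drop it into the test class above (it also needs using System.Collections.Generic, System.Linq and System.Web.Mvc):

    // Hypothetical helper: find every non-abstract MVC controller in the
    // web assembly, so new controllers are covered automatically.
    private static IEnumerable<Type> AllControllerTypes =>
        typeof(FooController).Assembly
            .GetTypes()
            .Where(t => typeof(IController).IsAssignableFrom(t) && !t.IsAbstract);

    [Test]
    [TestCaseSource(nameof(AllControllerTypes))]
    public void Test_If_Controller_Resolves_Automatically(Type controllerType)
    {
        Assert.IsNotNull(Scope.Resolve(controllerType));
    }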
Categories
geek programming security tips

using dependabot without uploading code to GitHub

Photo by Aideal Hwa on Unsplash

GitHub has a cool feature called Dependabot. It automatically checks your repositories for potential security problems with the dependencies they’re using. For .NET projects, that means it will check whether any of your NuGet package references should be updated because of security issues.
This is all great and awesome, but what if you have this huge in-house project that isn’t on GitHub, and you would like to run Dependabot on it?
Uploading the whole codebase to GitHub is one option, but that might not be what you want to do, or are even allowed to do. If nothing else, you don’t want to get the legal department involved, right?

Well, there is a little hack you can try. Dependabot simply checks the packages listed in the packages.config files of your projects, so if you create a new .NET project and add all the dependencies of your big project to the new project’s packages.config, you are set.
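
For reference, packages.config is just a flat XML list of package ids and versions, so no actual code has to leave the building. The entries below are made-up examples:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Illustrative entries; use your own project's ids and versions. -->
  <package id="Newtonsoft.Json" version="13.0.1" targetFramework="net48" />
  <package id="Serilog" version="2.10.0" targetFramework="net48" />
</packages>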

If you have a lot of projects, you can use the PowerShell script below to merge all packages.config files in your subfolders into a single one.
Paste the lines into a .ps1 script, and run it from your project’s root folder.

# Get all package lines from the packages.config files.
$lines = ls packages.config -r | get-content | where { $_ -like "*<package id=*" } | sort -unique
# Group them by package id, and keep only the first entry per package,
# to avoid the same package being listed with different version numbers.
# (The lines are string-sorted, so "first" means first in that sort order,
# not necessarily the lowest semantic version.)
$lines = $lines | % { new-object -type psobject -property @{ package=($_ -split '"')[1]; line=$_ } } | group -property package | % { $_.group[0].line }

# Write the merged packages.config file.
'<?xml version="1.0" encoding="utf-8"?>',
'<packages>',
$lines,
'</packages>' | set-content all-packages.config

Afterwards, copy the all-packages.config file over the packages.config of your new project, and upload that to GitHub.
Then configure Dependabot on your fresh repository (see Settings > Security & Analysis), and pretty soon you’ll be getting a report on any potential issues with the packages you’re using.

I’m sure the same trick can be used for other types of projects, like JavaScript and Python, as long as there’s some sort of configuration file that lists the packages you are using (package.json, requirements.txt, and so on).

Categories
microsoft programming software tips

.net core: could not load file or assembly runtime error

Shortly after posting my tips on fixing broken builds, I was in for another bug hunt that took a few hours to resolve. This time it wasn’t the build itself that failed, but the package that came out of it. For some reason, the application crashed instantly with one of those missing-DLL errors you know and hate in .NET.
Usually it means a NuGet package conflict. Cleaning the build sometimes helps, but not in this case. We are talking about system DLLs here, so you know you’re in trouble when that happens.

I was getting the following exception at runtime in a .NET Core app. It wasn’t always the same DLL, but every time it was a system DLL like System.Collections, System.Collections.Generic, System.Linq, etc.

Unhandled exception. System.IO.FileLoadException: Could not load file or assembly 'System.Collections, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The located assembly's manifest definition does not match the assembly reference. (0x80131040)
File name: 'System.Collections, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
   at Microsoft.Extensions.DependencyInjection.ServiceCollection..ctor()

When I checked the packaged files, the output did contain the wrong version of the DLL compared to the version number I was seeing in the solution, so the error message was correct. The question was: why does the build pick the wrong DLL version? My local build output was fine, however, which only added to the mystery.

The special bit about this project is that it also used a .NET Framework DLL, so there was some .NET Standard magic going on in the build.

The symptoms

  • The solution mixes .NET Core / .NET Standard / .NET Framework projects or NuGet packages.
  • You are building a self-contained application (so all DLLs are copied into the output folder).
  • Seemingly random 0x80131040 errors on system DLLs when you run the deployed .NET Core app.
  • Different builds give errors on different DLLs, but they are always system DLLs.
  • A local dotnet publish works, but the output from the build server doesn’t run.

The fix

The fix turned out to be stupefyingly simple. I was building the .sln file of my project in the CI pipeline, instead of the .csproj file of the console application itself.
Apparently things go wrong at build time and the wrong DLLs are copied into your output folder, giving you the exceptions above on startup.
The reason the same dotnet publish command works locally is probably that I have version 5 of the dotnet tool on my local machine, thanks to Visual Studio, while the build server has version 3.1, from the 3.1 SDK. My guess is that the 3.1 version handles mixed solutions differently than dotnet 5 does.
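
In pipeline terms, the fix can be as small as this; the solution and project paths below are made up for illustration:

# Before: publishing the whole solution, where the mixed-framework
# projects can copy the wrong System.* DLLs into the output folder.
dotnet publish MySolution.sln -c Release -o out

# After: publish only the console application's own project file.
dotnet publish src/MyConsoleApp/MyConsoleApp.csproj -c Release -o out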

Categories
programming software tips tools

tips on fixing a broken build

When builds break on the build server, it can be hard to figure out what the problem is. I’ve done this plenty of times, so here’s a collection of tips that can help you fix that weird error that keeps breaking the build. Build systems are pretty similar, so whether you’re using TeamCity, Azure DevOps, GitLab, Jenkins, CircleCI or whatever, these should pretty much work everywhere.

Make sure it builds locally.

This has to be the “Did you try turning it on and off again?” of build fixing. Sometimes the build really is broken. A file might not have been checked in, or a merge might have gone wrong, so what you have on your local machine might not be what’s in source control.
Get a clean copy of your repo in a separate folder and build that, to check if it builds OK. At least you’ll be 100% sure it’s nothing trivial before you start your investigation.

Try a clean build.

Another “try turning it on and off again” tip, I guess. On most build servers, you can opt to run a clean build, which means the agent sets up a fresh checkout folder for the new build. This way you make sure no leftovers from a previous build are causing the problem. Sometimes things also go wrong between source control and getting the latest version, messing with your build; a clean build fixes that too.

Does it fail everywhere?

Sometimes build agents are not installed in exactly the same way, and builds might break on only one or a few build agents. This is good news: you know you can always get a build from a good agent, so you aren’t stuck when you need to produce build artifacts, and you know there’s something not quite right on the machine where it’s failing.
The error will usually give you a clue about what’s missing, but most of the time you’ll only understand why after you’ve resolved the issue. Ah, the joys of software development.
My tip here is to compare the machines and see which build tools are missing or different from the agent where it does work. Make sure your SDK and build tool versions are the same, and can be found on the PATH. Check your shell versions, like PowerShell, whether they run in 32-bit (x86) or 64-bit mode, and whether all PS modules are installed for the mode you’re using; a few quick checks are listed below.
If you have an automated way to quickly produce a build agent, you might want to ditch the failing one in favor of a brand new one, and see if your build works there.
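
These are the kind of one-liners I’d run on both a good and a bad agent to compare them (PowerShell; the dotnet line obviously only applies to .NET builds):

# Compare the output of these between a working and a failing agent.
$PSVersionTable.PSVersion        # PowerShell version
[Environment]::Is64BitProcess    # running in 64-bit mode?
dotnet --info                    # installed SDKs and runtimes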

Check the logs.

It sounds trivial, but the log files of your build server contain a lot of details you don’t look at when things are working. Now that things are broken, it’s time to dig in and see if you can find a few clues. Most build systems also have settings to change how detailed those logs are; setting this to verbose can be just what you need to find that final clue and solve the puzzle.

Get into the machine.

When things should work, but appear broken on the build server, you’ll sometimes have to remote into the build agent and see what things look like in there. Paths might not be what you expect, files might actually be missing, and tools might be different. Open a shell, check if anything is missing, check if the build tools are available, or try to run the failing command yourself. This doesn’t always work, as build systems sometimes wrap your SDK calls and you’re never quite sure you’re running the same statement, but it might give you a hint of what’s going wrong anyway.
Look around on the machine, and you might find the problem. Can’t access the machine? Try the next tip.

Get into the machine, without getting into the machine.

You might not be able to access the build agent itself. Pesky admins, or it might even be a cloud machine you don’t own. Luckily, you can still edit the build itself, right? In most cases, you can run arbitrary shell commands as a build step, for whatever fancy stuff isn’t covered by default.
Well, nothing is stopping you from running an ls or type nuget.config command to find out what is really happening during your build. The output of those commands will normally show up in your build logs; you might have to check the more detailed full log files, but it should be there. This can be a tedious process, but it has saved my butt a few times when things didn’t work for a reason that was unknown at the time.
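
For example, a throwaway diagnostic build step like this (PowerShell; adjust the paths to your repo) shows you what the agent actually sees:

# Temporary build step: dump what the agent is really working with.
Get-Location                       # where did the checkout land?
Get-ChildItem -Recurse -Depth 2    # which files made it into the checkout?
Get-Content .\nuget.config         # which package feeds are configured (if you have one)?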

Categories
programming tips tools vim

base16-vim has all the vim themes you’ll ever need

You think that’s a bold statement? Hear me out. Vim themes are great, but you (and most certainly I) spend way too much time downloading and trying out all these fancy new themes, right? At some point I came across the base16-vim plugin, which has a shit-load of cool themes, all built on a base of 16 colors. I haven’t looked back since. Well, maybe once or twice, but still, I’m sticking to it.

Base16-vim contains 128 different themes, and it has all the popular ones like the github theme, solarized (dark & light), monokai, gruvbox and many more. To avoid spending too much time trying them all out, here are some examples of the ones I like the best. If you want to try them in a browser, check out the previews to find the perfect one for your setup.
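
Getting one of them active only takes a couple of lines in your vimrc. A minimal sketch, assuming you use vim-plug and a terminal with 24-bit color support (swap in any theme name from the plugin):

" Install the plugin (the classic repo; the project now lives under tinted-theming).
Plug 'chriskempson/base16-vim'

" After plug#end(): enable 24-bit color, then pick a theme.
if has('termguicolors')
  set termguicolors
endif
colorscheme base16-gruvbox-dark-hard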

Be sure to check out the geeky base16-greenscreen theme; it’s awesome.