
test Autofac registrations in a .NET unit test


Since .NET 4.x isn’t quite dead yet, here’s a tip from the glory days of ASP.NET unit testing with NUnit (or whatever you like).
What if you’re working on a project with lots of Autofac IOC registration modules? When you refactor code and add new dependencies to a class, it’s easy to forget to add those new registrations to your module.

In the end your code is finished and unit tested, and you think you’re done, until you run the app and *bam*: a runtime error because you’re missing an IOC registration. This gets even sillier when you have different projects with slightly different IOC registrations, so it might end up working in one project, but not in another.

So how do you go about testing such a thing?
The trick is to create a unit test that registers the same IOC dependencies as your project(s), and then instantiates your controllers (or other classes) through the IOC framework. The test will throw an exception if you’ve forgotten to register a new dependency, so you catch it as soon as the tests run. Sweet test automation indeed.

The tricky bit is that your tests don’t run inside a web request, which is what normally handles the lifetime scope of your IOC objects. So you’ll have to fake a request scope yourself in the test to be able to construct all the necessary objects.
Luckily, that’s quite easy to do in your unit tests.

Here’s an example of what that could look like, using NUnit. See the comments for more details.

public class AutofacMvcControllerResolvingTest
{
    // The Autofac lifetime scope, which keeps track of the created objects.
    protected ILifetimeScope Scope { get; set; }

    [SetUp]
    public void Setup()
    {
        // Here we set up our IOC container by adding
        // all the modules you normally use in your project.
        var builder = new ContainerBuilder();
        builder.RegisterModule<YourAutofacModule>();
        var container = builder.Build();
        // Now we create our scope.
        Scope = container.BeginLifetimeScope("httpRequest");
    }

    [TearDown]
    public void TearDown()
    {
        // Cleanup the scope at the end of the test run.
        Scope.Dispose();
    }

    // The TestCase NUnit attribute is handy to list a 
    // number of values which will be passed into the test.
    // That way you only need to write one test, instead of 
    // a test per class type.
    [Test]
    [TestCase(typeof(FooController))]
    [TestCase(typeof(BarController))]
    public void Test_If_Controller_Resolves(Type controllerType)
    {
        // We create the given type using the IOC scope.
        var controller = Scope.Resolve(controllerType);
        // Assert it isn't null, although if your registrations are wrong,
        // the line above will already have thrown an exception.
        Assert.IsNotNull(controller);
    }
}
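
One thing to keep in mind: for Scope.Resolve(controllerType) to succeed, the controllers themselves have to be registered in one of the modules you add in Setup. In an ASP.NET MVC project that’s typically done with the Autofac.Integration.Mvc package; here’s a minimal sketch of what such a module could look like, assuming your controllers live in the same assembly as FooController:

using Autofac;
using Autofac.Integration.Mvc;

public class ControllerRegistrationModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register all MVC controllers found in the web project's assembly,
        // so both the real app and the test above can resolve them.
        builder.RegisterControllers(typeof(FooController).Assembly);
    }
}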

.net core: could not load file or assembly runtime error

Shortly after posting about my tips to fix broken builds, I was in for another bug hunt that took a few hours to resolve. This time it wasn’t the build itself that failed, but the package that came out of it. For some reason, the application crashed instantly with one of those missing DLL errors you know and hate in .NET.
Usually that means a NuGet package conflict. Cleaning the build sometimes helps, but not in this case. We’re talking about system DLLs here, so you know you’re in trouble when that happens.

I was getting the following exception at runtime in a .NET Core app. It wasn’t always the same DLL, but every time it was a system DLL like System.Collections, System.Collections.Generic, System.Linq, etc.

Unhandled exception. System.IO.FileLoadException: Could not load file or assembly 'System.Collections, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The located assembly's manifest definition does not match the assembly reference. (0x80131040)
File name: 'System.Collections, Version=4.1.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
   at Microsoft.Extensions.DependencyInjection.ServiceCollection..ctor()

When I checked the package files, the wrong version of the DLL was indeed in the output compared to the version number referenced in the solution, so the error message was correct. The question was: why did the build pick the wrong DLL version? My local build output was fine, however, which only added to the mystery.

The special bit about this project is that it also used a .NET Framework DLL, so the build was doing some .NET Standard magic to tie it all together.

The symptoms

  • The project has a mix of .NET Core / .NET Standard / .NET Framework projects or NuGet packages.
  • You are building a self-contained application (so all DLLs are copied into the output folder).
  • Seemingly random 0x80131040 errors on system DLLs when you try to run the deployed .NET Core app.
  • Different builds give errors on different DLLs, but they are always system DLLs.
  • A local dotnet publish works, but the output from the build server doesn’t run.

The fix

The fix turned out to be stupefyingly simple. I was building the .sln file for my project in the CI pipeline, instead of the .csproj file for the console application itself.
Apparently things go wrong at build time and the wrong DLLs are copied into your output folder, giving you the exceptions above on startup.
The reason the same dotnet publish command works locally is probably that I have version 5 of the dotnet tool on my local machine, thanks to Visual Studio, while the build server runs version 3.1 from the 3.1 SDK. My guess is that the 3.1 version handles mixed solutions differently than dotnet 5 does.
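
To make that concrete, the change in the CI pipeline boiled down to something like this (solution and project names are made up for the example, and your exact flags will differ):

# Before: publishing the whole solution, which ended up copying the wrong system DLLs
dotnet publish MySolution.sln -c Release -o ./publish

# After: publishing the console application's project file directly
dotnet publish src/MyConsoleApp/MyConsoleApp.csproj -c Release -o ./publish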


extend visual studio with simple scripts

There’s this neat feature in Visual Studio called External Tools that’s underrated. It allows you to run any external tool and pass in stuff from Visual Studio, for example the current file in your editor or the currently selected text, and do something cool with that.

This means that you can write a PowerShell script that takes some input from the command line and, for example, edits a file (the current file in VS), or searches for a substring (the selected text) in other files, or in a database.

I’ve used this to do a number of complex find-replaces on the current file in a single run. I also use it to quickly find the source code of a stored procedure that’s used in code, by selecting the stored procedure’s name and running the script.

How do you set this up?
In Visual Studio, in the menu click Tools, then External Tools.
There you click Add and give the thing a title.
The command can be a bit tricky, but the easiest thing is to use a .cmd script, like test.cmd.


Then you create this cmd file and make sure it’s in your path. You can also use the full path to the script as your command instead.
Now pick your arguments from the arrow at the right of the Arguments input box. I’m using ItemFileName here, but there are plenty of options.

You’re all set.

As a test script you can use this gem:

@echo off
echo ** Command line parameters **
echo %*
pause

If you want to wrap a more advanced PowerShell script, you can use this:

@powershell.exe -nologo -noprofile -file c:\tools\do-something-awesome.ps1 %*

If you now run this with a file open in the editor, you’ll see the passed-in parameters in the script’s output.

With this technique you can easily and quickly extend your Visual Studio setup to do some mundane tasks a lot faster, using PowerShell, Node.js, Python or whatever your favorite scripting language is.
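
To give an idea of what such a wrapped script could look like, here’s a small PowerShell sketch for the do-something-awesome.ps1 file from the wrapper above. It assumes the external tool is set up to pass the full path of the current file, and the find-replace rules are just an example:

param(
    # Full path of the current file, passed in by the .cmd wrapper
    [Parameter(Mandatory = $true)]
    [string]$FilePath
)

# Example: run a couple of find-replaces on the current file in one go.
$content = Get-Content -Path $FilePath -Raw
$content = $content -replace 'TODO', 'TODO (reviewed)'
$content = $content -replace 'Foo', 'Bar'
Set-Content -Path $FilePath -Value $content

Write-Host "Processed $FilePath"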


publish a static website to Azure using GitHub actions

Last post I talked about setting up a serverless website on Azure using BLOB storage. What I didn’t go into is how to publish files to that site automatically. Yes, you can use the BLOB explorer to manually upload your files but seriously, who wants to do that kind of boring task if you can let computers do that for you.

Instead, what I do to publish this excellent programming guidelines website is the following:

  • I make my changes locally and test it.
  • I commit & push my changes to the master branch of my git repository.
  • A GitHub action kicks in and publishes the site to the Azure BLOB container.

How sweet is that? Pretty sweet, I know. How do you set this up? Well let me take you through the steps my friend, and automated Git deployment will soon be yours to enjoy as well.

  • You need to create a Git repository on GitHub.
    Now that you can create unlimited private repositories, you don’t even have to expose it to the public, which is nice.
  • Clone the repo locally, and create a source/ directory in it. This is where the source code will go, and that’s what we’ll push to the Azure BLOB container. Any other files you don’t want published go in the root, or in other folders in the root of your repository.
  • Copy your source code into the source/ folder, or create a simple index.html file for testing the publishing action.
  • Go to your repository page on the GitHub site, and click the Actions tab at the top.
  • Click New Workflow, choose “set up a workflow yourself”.
  • It will now create a YAML file for you containing your workflow code.
  • Paste the content for your YAML file listed below. Notice the “source” folder in there? That indicates which folder will be copied to Azure.
    In case you run into trouble, you can dig into the Azure Storage Action setup yourself, but it should do the trick.
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps: 
    - uses: actions/checkout@v1
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.0.100'
    - uses: lauchacarro/Azure-Storage-Action@v1.0
      with:
        enabled-static-website: 'true'
        folder: 'source'
        index-document: 'index.html'
        error-document: '404.html' 
        connection-string: ${{ secrets.CONNECTION_STRING }}
  • Last step is to set up that CONNECTION_STRING secret. This is the connection string to your Azure storage account. You can set the secret from your GitHub repository Settings tab, under Secrets.
    Click New Secret, use the name CONNECTION_STRING and paste the connection string value from the Access keys section of your Azure storage account (or grab it with the Azure CLI, as sketched below).
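
If you prefer the command line over the portal, the Azure CLI can print that connection string for you (the account and resource group names below are placeholders):

# Prints the full connection string for the storage account
az storage account show-connection-string --name mystorageaccount --resource-group my-resource-group --output tsv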

You’re all set up!
To test your publishing flow, all you need to do now is push a commit to your master branch, and see the GitHub action kick in and do its thing.
You should see your site appear in a few seconds. Sweet!

Update: recently I found out the workflow broke because of a bug in the latest version of the action. To work around this I pinned the version in my workflow YAML file to v1.0, which still works. It’s probably a good idea to avoid this kind of breaking change by pinning the version of any action you use in your GitHub workflows anyway. It will avoid those annoying issues where things work one day, and don’t the next.


how to host a serverless static website on azure

For my little gfpg project I wanted to put a simple static website online without having to set up and maintain a web server. I read about going serverless with a static site using S3 on AWS, but I wanted to try that on Azure instead. BLOB storage seemed the obvious alternative to S3, but it took some searching around and finding the right documentation on MSDN to get it all up and running.

If you’re on a similar quest to publish some static content to Azure BLOB storage as a serverless website, this short guide will help you along.

  1. First of all we need to create an Azure BLOB storage account for the site. The most important part is to choose a general-purpose v2 Standard storage account as the account kind. This is the only type that supports hosting a static website. Guess who didn’t do that.
  2. Next thing is to enable static hosting of your files. This will create a $web folder in your storage account, which will be the root folder of your website. It’s that simple.
  3. Copy your files into the $web folder using the Storage explorer blade in the Storage account menu, or the Storage Explorer app. You can already test your site using the Azure endpoint.
The Storage explorer is a quick and easy way to upload and manage your files in the BLOB storage account. If you’d rather script these first steps, there’s an Azure CLI sketch right after this list.
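
For reference, the three steps above map to a handful of Azure CLI commands. A rough sketch, with placeholder names for the storage account, resource group and local source folder:

# 1. Create a general-purpose v2 storage account (the only kind that supports static websites)
az storage account create --name mystaticsite --resource-group my-resource-group --kind StorageV2 --sku Standard_LRS

# 2. Enable static website hosting, which creates the $web container
az storage blob service-properties update --account-name mystaticsite --static-website --index-document index.html --404-document 404.html

# 3. Upload the site's files to the $web container
az storage blob upload-batch --account-name mystaticsite --source ./source --destination '$web'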

You can stop here if this is a personal project and you don’t need HTTPS support or a custom domain. In my case, I did want to go all the way, so here’s how to get that working as well.

  1. Get a domain name. Make it sassy ;). Make sure your domain registrar allows you to edit the CNAME records for your domain. This is pretty standard, but not all cheap web hosters allow this and you need it later on to hook up your domain to Azure.
  2. Set up an Azure CDN endpoint for your static site. I picked the Microsoft CDN option, which is the most basic one, so you don’t need an account with a third-party CDN provider. (If you’d rather script steps 2 to 4, there’s a CLI sketch after this list.)
  3. Now you can map your custom domain to your Azure CDN endpoint using a CNAME record.
  4. Create an HTTPS certificate for your site on Azure with just a few clicks. I was afraid this was going to be hard but it’s so damn easy it’s beautiful. There really is no excuse anymore to let your site just sit there on HTTP these days.
  5. Last thing to do is set up some caching rules for the CDN. We don’t want to hit the “slow” BLOB storage all the time; we want to serve from the faster CDN instead. Depending on the option you chose for the CDN this will differ, but if you picked the Microsoft one you have to use the Standard rules engine to set your caching rules. If you picked Akamai or Verizon, you can use CDN caching rules instead.
    For a simple setup on the Microsoft CDN, go to the CDN settings Rules engine page, and set a global cache expiration rule to override and an expiration you like.
    After a few minutes you’ll see the cache header appear in your HTTP requests.
  6. Here you can also create a rule to redirect HTTP traffic to HTTPS, so people don’t accidentally hit the insecure version.
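
As promised, here’s the Azure CLI sketch for steps 2 to 4, again with placeholder names (the origin is your storage account’s static website endpoint):

# 2. Create a CDN profile and endpoint in front of the static website endpoint
az cdn profile create --name my-cdn-profile --resource-group my-resource-group --sku Standard_Microsoft
az cdn endpoint create --name my-cdn-endpoint --profile-name my-cdn-profile --resource-group my-resource-group --origin mystaticsite.z6.web.core.windows.net

# 3. After adding the CNAME record at your registrar, map the custom domain
az cdn custom-domain create --endpoint-name my-cdn-endpoint --profile-name my-cdn-profile --resource-group my-resource-group --name my-custom-domain --hostname www.example.com

# 4. Enable HTTPS on the custom domain with a CDN-managed certificate
az cdn custom-domain enable-https --endpoint-name my-cdn-endpoint --profile-name my-cdn-profile --resource-group my-resource-group --name my-custom-domain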

One more tip on the CDN. You can also purge the CDN cache after you’ve pushed an update to your site, to apply the changes before the cache expires on its own. This is handy if you’ve set a rather long expiration time, because you don’t expect the site to change very often.

From the CDN account, you can purge content on a specific path, or everything at once.
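
Purging works from the portal or, again, from the Azure CLI; a quick sketch with the same placeholder names as above:

# Purge everything on the CDN endpoint after publishing an update
az cdn endpoint purge --name my-cdn-endpoint --profile-name my-cdn-profile --resource-group my-resource-group --content-paths '/*'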