Category Archives: geek

push a static website to Azure using GitHub actions

Last post I talked about setting up a serverless website on Azure using BLOB storage. What I didn’t go into is how to publish files to that site automatically. Yes, you can use the BLOB explorer to manually upload your files, but seriously, who wants to do that kind of boring task when you can let computers do it for you?

Instead, what I do to publish this excellent programming guidelines website is the following:

  • I make my changes locally and test them.
  • I commit & push my changes to the master branch of my git repository.
  • A GitHub action kicks in and publishes the site to the Azure BLOB container.

How sweet is that? Pretty sweet, I know. How do you set this up? Well, let me take you through the steps, my friend, and automated Git deployment will soon be yours to enjoy as well.

  • You need to create a Git repository on GitHub.
    Now that you can create unlimited private repositories, you don’t even have to expose it to the public, which is nice.
  • Clone the repo locally, and create a source/ directory in it. This is where the source code will go, and that’s what we’ll push to the Azure BLOB container. Any other files you don’t want published go in the root, or in other folders in the root of your repository.
  • Copy your source code into the source/ folder, or create a simple index.html file for testing the publishing action.
  • Go to your repository page on the GitHub site, and click the Actions tab at the top.
  • Click New Workflow, choose “set up a workflow yourself”.
  • It will now create a YAML file for you containing your workflow code.
  • Paste the content for your YAML file listed below. Notice the “source” folder in there? That indicates what folder will be copied to Azure.
    In case you run into trouble, you can dig into the Azure Storage Action setup yourself, but it should do the trick.
# run this workflow on every push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.0.100'
    - uses: lauchacarro/Azure-Storage-Action@master
      with:
        enabled-static-website: 'true'
        folder: 'source' # this is the folder that gets copied to the BLOB container
        index-document: 'index.html'
        error-document: '404.html'
        connection-string: ${{ secrets.CONNECTION_STRING }}
  • The last step is to set up that CONNECTION_STRING secret. This is the connection string to your Azure storage account. You can set the secret from your GitHub repository Settings tab, under Secrets.
    Click New Secret, use the name CONNECTION_STRING, and paste the connection string value from the Access keys page of your Azure storage account (see below for a quick way to grab it).
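
If you have the Azure CLI installed, that quick way is the show-connection-string command. The account and resource group names below are placeholders for your own:

az storage account show-connection-string --name mystorageaccount --resource-group mygroup --output tsv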

You’re all set up!
To test your publishing flow, all you need to do now is push a commit to your master branch, and see the GitHub action kick in and do its thing.
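In other words, something like this:

git add .
git commit -m "update index.html"
git push origin master
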
You should see your site appear in a few seconds. Sweet!

programming guidelines, sort of

Years ago I ran into a website offering crude design advice. I thought it would be funny to make something similar for programming advice or guidelines. I started with a one-page website with a bunch of tips and then after a while forgot about it. Recently I ran into that project again and figured I might as well put it out here for the heck of it.

So here it is, some good fucking programming guidelines for you developers out there to have a laugh with, or perhaps even find a few useful tips and links in there. I swear, most of those tips are actually valid, even though they are presented in a tongue-in-cheek way.

So have fun with it. I know I did when I built the damn thing.

Conway’s game of life

The mathematician John Horton Conway, inventor of the well-known Game of Life, recently passed away. The Game of Life is an algorithmic game I ended up building an HTML version of at some point, just for fun. After seeing the news I remembered I still had it sitting around on my hard drive somewhere. So, in memoriam, here it is to play around with.

It’s in an iframe, but if it doesn’t render properly you can use the direct link as well.
Thanks for the games John. RIP.

use a PowerShell script as a Vim filter to encode a url

Artistic closeup of a .vimrc file

Vim filters are cool. They let you run the content of your current buffer through a command and have its output returned into the buffer. That means you can use filters to edit text using any command your operating system has available.

Now on a Linux machine that’s quite handy. On a Windows machine it’s less handy, because the default shell is CMD, which doesn’t have all those handy Unix text manipulation utilities. But what about a PowerShell script that encodes or decodes a URL, for example?

It turns out that wasn’t as easy as I expected, so I’ll spill it here in case you’re looking to do something similar.

First, calling the script from Vim means you have to call powershell.exe, because by default Vim uses cmd.exe as its shell. That isn’t so hard:

nnoremap <Leader>dc :%!powershell.exe -noprofile -nologo -file c:\tools\decode-string.ps1<CR>
nnoremap <Leader>ec :%!powershell.exe -noprofile -nologo -file c:\tools\encode-string.ps1<CR>

I create 2 shortcuts here, to encode (ec) and decode (dc), triggered using the leader key.
These will pipe the content of your buffer through the script, and get whatever it spits out on standard output back into your buffer.
The extra command line parameters I pass to powershell.exe are there to make it as fast as possible and skip any unnecessary junk. The -noprofile flag is the most important one, as it skips loading any modules you have set up in your PowerShell profile.

Reading from the pipe in a PowerShell script wasn’t as easy as I thought. Using the ValueFromPipeline attribute on the input parameter didn’t work for some reason. After some searching I came across the magical $input automatic variable, which holds anything the script receives from standard input. Handy.
So the script was as simple as doing this:

# Load the System.Web assembly so we can get at HttpUtility
[Reflection.Assembly]::LoadWithPartialName("System.Web") | Out-Null
# $input holds whatever was piped in through standard input
[System.Web.HttpUtility]::UrlEncode($input)

What do we do when PowerShell can’t help us out of the box? That’s right. We summon the powers of .NET assemblies to get the job done. In this case, we’re summoning System.Web and using the HttpUtility class to encode the incoming data.
We do exactly the same to decode text by the way:

[Reflection.Assembly]::LoadWithPartialName("System.Web") | Out-Null
[System.Web.HttpUtility]::UrlDecode($input)
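
As a side note: LoadWithPartialName is deprecated these days, so on more recent PowerShell versions you can load the assembly the cleaner way instead:

Add-Type -AssemblyName System.Web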

This is pretty powerful. By using the $input variable in PowerShell together with the Vim filter function, you can do all sorts of text transformations. This can make your developer life a lot easier.
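
To give one more example of where this can go, here’s a sketch of a hypothetical c:\tools\base64-encode.ps1 filter script (the name and path are mine, pick your own) that Base64 encodes whatever is in your buffer:

# Read everything from standard input as a single string
$text = ($input | Out-String).TrimEnd("`r", "`n")
# Base64 encode the UTF-8 bytes of the buffer content
[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($text))

Hook it up to a mapping just like the others:

nnoremap <Leader>b6 :%!powershell.exe -noprofile -nologo -file c:\tools\base64-encode.ps1<CR>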

fight corona with folding@home on a headless ubuntu linux box

I wrote about using Grid computing to fight cancer a while back using leftover Azure credits.
So now with Corona having the world in its grip, it’s time to shift our attention to that nasty virus and throw some CPU cycles at it instead.

If you have an Ubuntu machine running somewhere and you want to install the Folding@Home client on it, you can do so by following the simple steps listed below.

First download the FAH client application:

wget https://download.foldingathome.org/releases/public/release/fahclient/debian-testing-64bit/v7.4/fahclient_7.4.4_amd64.deb

Then, run the installer. It will guide you through the setup process.

sudo dpkg -i --force-depends fahclient_7.4.4_amd64.deb

Just choose a name and an optional team. It’s best to let it start automatically as well.

That’s basically it. Since you’re running it all headless, there’s no need for any of the other packages listed on the site.

You can check if things are running by looking at the log file stored in /var/lib/fahclient/log.txt.
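
For example, to follow it live:

tail -f /var/lib/fahclient/log.txt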

If you want to change the configuration after installing the client, edit /etc/fahclient/config.xml and then stop and start the client like this:

sudo /etc/init.d/FAHClient stop
sudo /etc/init.d/FAHClient start
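
For reference, a minimal config.xml looks roughly like this; I’m quoting the element names from memory, so double-check them against the file the installer generated:

<config>
  <!-- your folding identity and team (example values) -->
  <user value='yourname'/>
  <team value='0'/>
  <passkey value=''/>
  <!-- how much CPU to throw at it -->
  <power value='medium'/>
</config>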

If you had the World Community Grid (BOINC) client installed already, you can stop that by using:

sudo /etc/init.d/boinc-client stop
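
If you also want to keep it from starting again at boot (assuming a systemd based Ubuntu), disable the service too:

sudo systemctl disable boinc-client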

If you want the full installation instructions for the FAHClient, or installation instructions for other flavors of Linux, check out the official installation guide.
Once it’s running and processed a few work units, you can check your stats at this URL: https://stats.foldingathome.org/donor/yourname

Have fun kicking Corona ass!

.NET Core 3.x Docker app hangs on opening a SQL server database connection

This is searchable on the internet, but a little write-up might be useful if you are running into the same problem.
Recently we upgraded some .NET Core applications running in Docker containers to .NET Core 3.1. A funny thing happened when we tested those apps: they simply stopped working, without throwing any errors to make us any wiser about what the issue was.

That funny thing wasn’t so funny to be frank.

When figuring out what was going on, we noticed that the app seemed to hang on creating a database connection. No exception was thrown, we didn’t get a timeout, it just got stuck on opening the connection.
After searching around a bit we ran into a number of GitHub issues about this on the .NET Core repos, of which this one (https://github.com/dotnet/SqlClient/issues/201) sums it all up.

It turns out that the Docker image used for 3.1 is a Debian based image. Compared to the image used to build a 2.1 app, the OpenSSL TLS restrictions on it have been tightened: only TLS 1.2 connections with a strong cipher are allowed. If your SQL server does not meet all of these requirements, the .NET Core 3.x app will hang on trying to set up the database connection.

Great, but how do you fix this? Well there are 2 workarounds you can apply.

#1 Use the Ubuntu image instead of the Debian one.

By default, the Debian image is used if you let Visual Studio generate the Dockerfile. If you change this to one of the Ubuntu images, you are fine.

So instead of this in your Dockerfile:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base

You can use this:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-bionic AS base

If you don’t like using the Ubuntu image for some reason, you can still go for door number 2.

#2 Update the OpenSSL configuration

That OpenSSL configuration is stored in /etc/ssl/openssl.cnf. The lines that are the culprit are the MinProtocol setting and the CipherString.
Depending on what your issue is (TLS 1.2 not being available, or the cipher strength) you can change one, or both, of these lines.
The Debian config currently looks like:

MinProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2

Depending on what you need, you can lower the SECLEVEL to allow not-so-good ciphers, or lower the minimum protocol level if you can’t use TLS 1.2.
If you had to change both lines, that would look like this:

MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=1

In your Dockerfile you can do this with the following statements:

RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf
RUN sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
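
Putting it all together, a minimal sketch of what the top of such a Dockerfile could look like; the publish path and app name are placeholders for whatever your project uses:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base
# relax the OpenSSL defaults so older TLS versions and weaker ciphers are accepted
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf \
 && sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
WORKDIR /app
COPY bin/Release/netcoreapp3.1/publish/ .
ENTRYPOINT ["dotnet", "YourApp.dll"]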

The best idea is of course to get the SQL guys to upgrade security on the SQL server itself and make sure the defaults just work. But in corporate environments that is something that can take a while, and you probably need to get that app running yesterday, so…