
publish a static website to Azure using GitHub actions

Last post I talked about setting up a serverless website on Azure using BLOB storage. What I didn’t go into is how to publish files to that site automatically. Yes, you can use the BLOB explorer to manually upload your files, but seriously, who wants to do that kind of boring task when you can let computers do it for you?

Instead, what I do to publish this excellent programming guidelines website is the following:

  • I make my changes locally and test them.
  • I commit & push my changes to the master branch of my git repository.
  • A GitHub action kicks in and publishes the site to the Azure BLOB container.

How sweet is that? Pretty sweet, I know. How do you set this up? Well, let me take you through the steps, my friend, and automated Git deployment will soon be yours to enjoy as well.

  • You need to create a Git repository on GitHub.
    Now that you can create unlimited private repositories, you don’t even have to expose it to the public, which is nice.
  • Clone the repo locally, and create a source/ directory in it. This is where the source code will go, and that’s what we’ll push to the Azure BLOB container. Any other files you don’t want published go in the root, or in other folders in the root of your repository.
  • Copy your source code into the source/ folder, or create a simple index.html file for testing the publishing action.
  • Go to your repository page on the GitHub site, and click the Actions tab at the top.
  • Click New Workflow, choose “set up a workflow yourself”.
  • It will now create a YAML file for you containing your workflow code.
  • Paste the YAML content listed below into that file. Notice the “source” folder in there? That indicates which folder will be copied to Azure.
    In case you run into trouble, you can dig into the Azure Storage Action setup yourself, but it should do the trick.
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps: 
    - uses: actions/checkout@v1
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.0.100'
    - uses: lauchacarro/Azure-Storage-Action@v1.0
      with:
        enabled-static-website: 'true'
        folder: 'source'
        index-document: 'index.html'
        error-document: '404.html' 
        connection-string: ${{ secrets.CONNECTION_STRING }}
  • Last step is to set up that CONNECTION_STRING secret. This is the connection string for your Azure storage account. You can set the secret from your GitHub repository Settings tab, under Secrets.
    Click New Secret, then use the name CONNECTION_STRING and paste the connection string value from the Access keys page of your Azure storage account. You can also grab that value from the command line, as shown below.
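
Here’s a sketch of that command-line route, using the Az PowerShell module; the resource group and storage account names are placeholders for your own:

# Assumes the Az module and a logged-in session (Connect-AzAccount).
$key = (Get-AzStorageAccountKey -ResourceGroupName 'my-rg' -Name 'mystorageaccount')[0].Value
"DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=$key;EndpointSuffix=core.windows.net"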

You’re all set up!
To test your publishing flow, all you need to do now is push a commit to your master branch, and see the GitHub action kick in and do its thing.
You should see your site appear in a few seconds. Sweet!
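
If you want to double-check from the command line that the action actually landed your files in the $web container, something like this does the trick (a sketch again, using the Az module and your connection string):

# List what's currently in the static website container.
$ctx = New-AzStorageContext -ConnectionString '<your connection string>'
Get-AzStorageBlob -Container '$web' -Context $ctx | Select-Object Name, LastModified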

Update: recently I found out the workflow broke because of a bug in the latest version of the action. To bypass this, I’ve now pinned the version in my workflow YAML file to v1.0, which still works. It’s probably a good idea to avoid this kind of breaking change by pinning the version of any action you use in your GitHub workflows anyway. It will save you those annoying issues where things work one day and don’t the next.

how to host a serverless static website on azure

For my little gfpg project I wanted to put a simple static website online without having to set up and maintain a web server. I read about going serverless with a static site using S3 on AWS, but I wanted to try that on Azure instead. BLOB storage seemed the obvious alternative to S3, but it took some searching around and finding the right documentation on MSDN to get it all up and running.

If you’re on a similar quest to publish some static content to Azure BLOB storage as a serverless website, this short guide will help you along.

  1. First of all we need to create an Azure BLOB storage account for the site. The most important part is to choose general-purpose v2 Standard as the account kind. This is the only type that supports hosting a static website. Guess who didn’t do that.
  2. Next thing is to enable static hosting of your files. This will create a $web folder in your storage account, which will be the root folder of your website. It’s that simple.
  3. Copy your files into the $web folder using the Storage explorer blade in the Storage account menu, or the Storage explorer app. You can already test your site using the Azure endpoint.
The Storage explorer is a quick and easy way to upload and manage your files in the BLOB storage account.
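
If you prefer scripting over clicking through the portal, those three steps look roughly like this with the Az PowerShell module. A sketch, not gospel: the resource group, account name and location are placeholders, and I assume you’re already logged in with Connect-AzAccount:

# 1. Create a general-purpose v2 storage account; only this kind supports static websites.
New-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystorageaccount' `
    -Location 'westeurope' -SkuName 'Standard_LRS' -Kind 'StorageV2'

# 2. Enable static website hosting; this creates the $web container.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'my-rg' -Name 'mystorageaccount')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey $key
Enable-AzStorageStaticWebsite -Context $ctx -IndexDocument 'index.html' -ErrorDocument404Path '404.html'

# 3. Upload a test page into $web (single quotes keep PowerShell from expanding $web).
Set-AzStorageBlobContent -Container '$web' -File '.\index.html' -Blob 'index.html' `
    -Context $ctx -Properties @{ ContentType = 'text/html' }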

You can stop here if this is a personal project and you don’t need HTTPS support or a custom domain. In my case, I did want to go all the way, so here’s how to get that working as well.

  1. Get a domain name. Make it sassy ;). Make sure your domain registrar allows you to edit the CNAME records for your domain. This is pretty standard, but not all cheap web hosts allow it, and you’ll need it later on to hook up your domain to Azure.
  2. Set up an Azure CDN endpoint for your static site. I picked the Microsoft CDN option which is the most basic one, so you don’t need any accounts with a third party CDN provider.
  3. Now you can map your custom domain to your Azure CDN endpoint using a CNAME record.
  4. Create an HTTPS certificate for your site on Azure with just a few clicks. I was afraid this was going to be hard but it’s so damn easy it’s beautiful. There really is no excuse anymore to let your site just sit there on HTTP these days.
  5. Last thing to do is set up some caching rules for the CDN. We don’t want to hit the “slow” BLOB storage all the time; we want to serve from the faster CDN instead. How you do this depends on the option you chose for the CDN: if you picked the Microsoft one, you have to use the Standard rules engine to set your caching rules, while Akamai and Verizon use CDN caching rules instead.
    For a simple setup on the Microsoft CDN, go to the CDN settings Rules engine page, and set a global cache expiration rule with the Override behavior and an expiration you like.
    After a few minutes you’ll see the cache header appear in your HTTP responses; see the snippet after this list for a quick way to check.
  6. Here you can also create a rule to redirect HTTP traffic to HTTPS, so people don’t accidentally hit the insecure version.
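
As promised, here are two quick sanity checks you can run from a PowerShell prompt once the CNAME and caching rules are in place (placeholder domain, obviously):

# Confirm the custom domain CNAMEs to your CDN endpoint (uses the Windows DnsClient module).
Resolve-DnsName 'www.example.com' -Type CNAME

# Confirm the CDN is returning the cache expiration header you configured.
(Invoke-WebRequest 'https://www.example.com' -UseBasicParsing).Headers['Cache-Control']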

One more tip on the CDN: you can also purge the CDN cache after you’ve pushed an update to your site, to apply the changes before the CDN cache expires. This is handy if you’ve set a rather long expiration time because you don’t expect the site to change very often.

From the CDN account, you can purge content on a specific path, or everything at once.
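
The purge can be scripted too. With the Az.Cdn PowerShell module it looks something like the line below; a sketch with placeholder names, and note that the purge cmdlet has been renamed between module versions, so double-check what your version ships with:

# Purge everything on the endpoint; use a narrower path like '/css/*' for a partial purge.
Unpublish-AzCdnEndpointContent -ResourceGroupName 'my-rg' -ProfileName 'my-cdn-profile' `
    -EndpointName 'my-endpoint' -PurgeContent '/*'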

programming guidelines, sort of

Years ago I ran into a website offering crude design advice. I thought it would be funny to make something similar for programming advice or guidelines. I started with a one-page website with a bunch of tips and then after a while forgot about it. Recently I ran into that project again and figured I might as well put it out here for the heck of it.

So here it is, some good fucking programming guidelines for you developers out there to have a laugh with, or perhaps even find a few useful tips and links in there. I swear, most of those tips are actually valid, even though they are presented in a tongue-in-cheek way.

So have fun with it. I know I did when I built the damn thing.

Conway’s game of life

Recently John Horton Conway, the mathematician who invented the well-known Game of Life, passed away. It’s an algorithmic game I ended up building an HTML version of at some point, for fun. After seeing the news I remembered I still had it sitting around on my hard drive somewhere. So, in memoriam, here it is to play around with.

It’s in an iframe, but if it doesn’t render properly you can use the direct link as well.
Thanks for the games John. RIP.

use a PowerShell script as a Vim filter to encode a url

[Image: artistic closeup of a .vimrc file]

Vim filters are cool. They let you run the content of your current buffer through a command and have its output returned into the buffer. That means you can use filters to edit text using any command your operating system has available.

Now on a Linux machine that’s quite handy. On a Windows machine it’s less so, because the default shell is CMD, which doesn’t have all those handy Unix text manipulation utilities. But what about a PowerShell script that would encode or decode a URL, for example?
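
Simple cases are easy enough: any PowerShell one-liner that reads standard input works as a filter. Sorting the lines of your buffer, for instance, could look like this (a sketch; $input picks up whatever Vim pipes into the command):

:%!powershell.exe -noprofile -nologo -command "$input | Sort-Object"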

The URL encoding part turned out not to be as easy as I expected, so I’ll spill it here in case you’re looking to do something similar.

First, calling the script from Vim means you have to call powershell.exe explicitly, because by default Vim uses cmd.exe as its shell. That isn’t so hard:

nnoremap <Leader>dc :%!powershell.exe -noprofile -nologo -file c:\tools\decode-string.ps1<CR>
nnoremap <Leader>ec :%!powershell.exe -noprofile -nologo -file c:\tools\encode-string.ps1<CR>

I create two shortcuts here, to encode (ec) and decode (dc), triggered using the leader key.
These will pipe the content of your buffer through the script, and get whatever it spits out through the standard output back into your buffer.
The extra command line parameters I pass to powershell.exe are there to make it start as fast as possible and skip any unnecessary junk. The -noprofile one is the most important, as it skips loading any special modules you have set up in your PowerShell profile.

Reading from the pipe in a PowerShell script wasn’t as easy as I thought. Using the ValueFromPipeline attribute on the input parameter didn’t work for some reason. After some searching I came across the magical $input variable, which holds anything received from standard input for use in your script. Handy.
So the script was as simple as doing this:

# Load the System.Web assembly for HttpUtility, then URL-encode whatever came in on stdin.
Add-Type -AssemblyName System.Web
[System.Web.HttpUtility]::UrlEncode($input)

What do we do when PowerShell can’t help us out of the box? That’s right. We summon the powers of .NET assemblies to get the job done. In this case, we’re summoning System.Web and using the HttpUtility class to encode the incoming data.
We do exactly the same to decode text by the way:

# Same assembly, opposite direction: URL-decode the incoming text.
Add-Type -AssemblyName System.Web
[System.Web.HttpUtility]::UrlDecode($input)
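
You don’t need Vim to test the scripts, by the way. Piping a string into them from a PowerShell prompt works too, since the pipeline also feeds $input:

# Quick round-trip test (same hypothetical paths as the mappings above).
'foo bar?' | & 'c:\tools\encode-string.ps1'     # -> foo+bar%3f
'foo+bar%3f' | & 'c:\tools\decode-string.ps1'   # -> foo bar?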

This is pretty powerful. By using the $input variable in PowerShell together with the Vim filter function, you can do all sorts of text transformations. This can make your developer life a lot easier.

testing CORS headers with PowerShell

CORS can be a pain in the ass to test for your backend service. That’s because you have to emulate a cross-domain request from a browser, and sometimes you just want to quickly confirm your CORS configuration or code is working. Can this be done using PowerShell, so you don’t have to fire up a test site to do that request, or use special tooling like Postman or whatever?

Well yes! Using the statement below you can see all the headers that are returned from the request to the test URL on your API. If CORS is correctly configured, you will see the Access-Control-Allow-Origin header in that list. It contains the allowed origin site domain, or an asterisk to indicate that any domain is allowed.
If not, something is still wrong.


(Invoke-WebRequest https://bar.com/api/stuff/ -UseBasicParsing -Headers @{ "Origin" = "http://www.foo.com" }).Headers

The trick is passing in the Origin header when you make the request. Without it, you won’t get any CORS headers back in most cases. I tested this using an ASP.NET Core backend by the way, but most backend systems should behave in the same way, as this is how browsers do it.
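
If you want to go one step further, you can also emulate the preflight request a browser sends before a cross-origin POST. A sketch against the same made-up URL; assuming your endpoint answers OPTIONS requests, a correct CORS setup should send back Access-Control-Allow-Methods and friends:

# Emulate a browser preflight: an OPTIONS request announcing the intended method.
(Invoke-WebRequest 'https://bar.com/api/stuff/' -Method Options -UseBasicParsing -Headers @{
    'Origin'                        = 'http://www.foo.com'
    'Access-Control-Request-Method' = 'POST'
}).Headers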