Category Archives: microsoft

publish a static website to Azure using GitHub actions

Last post I talked about setting up a serverless website on Azure using BLOB storage. What I didn’t go into is how to publish files to that site automatically. Yes, you can use the BLOB explorer to manually upload your files but seriously, who wants to do that kind of boring task if you can let computers do that for you.

Instead, what I do to publish this excellent programming guidelines website is the following:

  • I make my changes locally and test them.
  • I commit & push my changes to the master branch of my git repository.
  • A GitHub action kicks in and publishes the site to the Azure BLOB container.

How sweet is that? Pretty sweet, I know. How do you set this up? Well let me take you through the steps my friend, and automated Git deployment will soon be yours to enjoy as well.

  • You need to create a Git repository on GitHub.
    Now that you can create unlimited private repositories, you don’t even have to expose it to the public, which is nice.
  • Clone the repo locally, and create a source/ directory in it. This is where the source code will go, and that’s what we’ll push to the Azure BLOB container. Any other files you don’t want published go in the root, or in other folders in the root of your repository.
  • Copy your source code into the source/ folder, or create a simple index.html file for testing the publishing action.
  • Go to your repository page on the GitHub site, and click the Actions tab at the top.
  • Click New Workflow, choose “set up a workflow yourself”.
  • It will now create a YAML file for you containing your workflow code.
  • Paste in the YAML content listed below. Notice the “source” folder in there? That indicates which folder will be copied to Azure.
    In case you run into trouble, you can dig into the Azure Storage Action setup yourself, but it should do the trick.
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps: 
    - uses: actions/checkout@v1
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.0.100'
    - uses: lauchacarro/Azure-Storage-Action@v1.0
      with:
        enabled-static-website: 'true'
        folder: 'source'
        index-document: 'index.html'
        error-document: '404.html' 
        connection-string: ${{ secrets.CONNECTION_STRING }}
  • Last step is to set up that CONNECTION_STRING secret. This is the connection string to your Azure storage container. You can set the secret from your GitHub repository Settings tab, under Secrets.
    Click New Secret, use the name CONNECTION_STRING, and paste the connection string value from your storage account’s Access keys page, or build it with PowerShell as shown below.
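
This is a minimal sketch using the Az module; the resource group and account names are placeholders, so swap in your own.

# Requires the Az.Storage module and an active login (Connect-AzAccount).
$key = (Get-AzStorageAccountKey -ResourceGroupName 'my-rg' -Name 'mystaticsite')[0].Value
$connectionString = "DefaultEndpointsProtocol=https;AccountName=mystaticsite;AccountKey=$key;EndpointSuffix=core.windows.net"
$connectionString   # this is the value to paste into the CONNECTION_STRING secret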

You’re all set up!
To test your publishing flow, all you need to do now is push a commit to your master branch, and see the GitHub action kick in and do its thing.
You should see your site appear in a few seconds. Sweet!

Update: recently I found out the workflow broke because of a bug in the latest version of the action. To get around it I pinned the version in my workflow YAML file to v1.0, which still works. It’s probably a good idea to avoid this kind of breaking change by pinning the version of any action you use in your workflows anyway. It will save you those annoying issues where things work one day, and don’t the next.

how to host a serverless static website on azure

For my little gfpg project I wanted to put a simple static website online without having to set up and maintain a web server. I read about going serverless with a static site using S3 on AWS, but I wanted to try that on Azure instead. BLOB storage seemed the obvious alternative to S3, but it took some searching around and finding the right documentation on MSDN to get it all up and running.

If you’re on a similar quest to publish some static content to Azure BLOB storage as a serverless website, this short guide will help you along.

  1. First of all we need to create an Azure BLOB storage account for the site. The most important part is to choose a general-purpose v2 Standard storage account as the account kind; this is the only type that supports hosting a static website. Guess who didn’t do that.
  2. Next thing is to enable static hosting of your files. This will create a $web folder in your storage account, which will be the root folder of your website. It’s that simple.
  3. Copy your files into the $web folder using the Storage explorer blade in the Storage account menu, or the Storage explorer app. You can already test your site using the Azure endpoint.
The Storage explorer is a quick and easy way to upload and manage your files in the BLOB storage account. If you’d rather script these steps, the PowerShell sketch below does the same thing with the Az module.
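
This is a minimal sketch assuming the Az module is installed and you’re logged in with Connect-AzAccount; all the names are placeholders.

# Create the storage account (must be general-purpose v2 for static websites).
New-AzResourceGroup -Name 'my-rg' -Location 'westeurope'
$account = New-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystaticsite' `
    -Location 'westeurope' -SkuName 'Standard_LRS' -Kind 'StorageV2'

# Enable the static website feature; this creates the $web container.
Enable-AzStorageStaticWebsite -Context $account.Context `
    -IndexDocument 'index.html' -ErrorDocument404Path '404.html'

# Upload a test page (single quotes around $web, or PowerShell tries to expand it).
Set-AzStorageBlobContent -Context $account.Context -Container '$web' `
    -File '.\index.html' -Blob 'index.html' -Properties @{ ContentType = 'text/html' }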

You can stop here if this is a personal project and you don’t need HTTPS support or a custom domain. In my case, I did want to go all the way, so here’s how to get that working as well.

  1. Get a domain name. Make it sassy ;). Make sure your domain registrar allows you to edit the CNAME records for your domain. This is pretty standard, but not all cheap web hosters allow this and you need it later on to hook up your domain to Azure.
  2. Set up an Azure CDN endpoint for your static site. I picked the Microsoft CDN option which is the most basic one, so you don’t need any accounts with a third party CDN provider.
  3. Now you can map your custom domain to your Azure CDN endpoint using a CNAME record.
  4. Create an HTTPS certificate for your site on Azure with just a few clicks. I was afraid this was going to be hard but it’s so damn easy it’s beautiful. There really is no excuse anymore to let your site just sit there on HTTP these days.
  5. Last thing to do is set up some caching rules for the CDN. We don’t want to hit the “slow” BLOB storage all the time; we want the faster CDN to serve the content instead. How you do this depends on the CDN option you chose: if you picked the Microsoft one you have to use the Standard rules engine to set your caching rules, while Akamai or Verizon let you use CDN caching rules instead.
    For a simple setup on the Microsoft CDN, go to the Rules engine page in the CDN settings and set a global cache expiration rule to Override, with an expiration you like.
    After a few minutes you’ll see the cache header appear in your HTTP responses.
  6. Here you can also create a rule to redirect HTTP traffic to HTTPS, so people don’t accidentally hit the insecure version.

One more tip on the CDN. You can also purge the CDN cache after you’ve pushed an update to your site, so the changes show up before the CDN cache expires. This is handy if you’ve set a rather big expiration time because you don’t expect the site to change very often.

From the CDN account, you can purge content on a specific path, or everything at once.
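
If you want to script that purge as well, the older Az.Cdn cmdlets let you do it roughly like this (cmdlet and parameter names changed in newer versions of the module, so treat this as a sketch; the names are placeholders):

# Purge everything under the endpoint; use a narrower path like '/images/*' if you prefer.
Unpublish-AzCdnEndpointContent -ResourceGroupName 'my-rg' -ProfileName 'my-cdn-profile' `
    -EndpointName 'my-cdn-endpoint' -PurgeContent '/*'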

.NET Core 3.x Docker app hangs on opening a SQL server database connection

This is searchable on the internet, but a little write-up might be useful if you are running into the same problem.
Recently we upgraded some .NET Core applications running in a Docker container to .NET Core 3.1. A funny thing happened when we tested those apps: they simply stopped working, without throwing any errors to make us wiser about what the issue was.

That funny thing wasn’t so funny to be frank.

When figuring out what was going on we noticed that the app seemed to be hanging on creating a database connection. No exception was thrown, we didn’t get a timeout, it just got stuck on opening the connection.
After searching around a bit we ran into a number of GitHub issues related to this on .NET Core repos, where this one (https://github.com/dotnet/SqlClient/issues/201) sums it all up.
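
To rule out a plain connectivity or credentials problem, it can help to open the same connection from a machine outside the container first, for example with a few lines of PowerShell (the connection string below is a placeholder):

# Uses the classic System.Data.SqlClient client that ships with the .NET Framework.
$conn = New-Object System.Data.SqlClient.SqlConnection('Server=mysqlserver;Database=mydb;Integrated Security=true')
$conn.Open()
$conn.State    # prints 'Open' if the server itself is reachable
$conn.Close()

If that works fine, the problem sits in the container’s TLS configuration, which is what was going on here.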

It turns out that the Docker image used for 3.1 is a Debian based image. On this image the OpenSSL TLS restrictions have been tightened compared to the image used to build a 2.1 app: only TLS 1.2 connections with a strong cipher are allowed. If your SQL server does not meet all of these requirements, the .NET Core 3.x app will hang when trying to set up the database connection.

Great, but how do you fix this? Well there are 2 workarounds you can apply.

#1 Use the Ubuntu image instead of the Debian one.

By default, the Debian image is used if you let Visual Studio generate the Dockerfile. If you change this to one of the Ubuntu images, you are fine.

So instead of this in your Dockerfile:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base

You can use this:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-bionic AS base

If you don’t like using the Ubuntu image for some reason, you can still go for door number 2.

#2 Update the OpenSSL configuration

That OpenSSL configuration is stored in /etc/ssl/openssl.cnf. The lines that are the culprit are the MinProtocol setting and the CipherString.
Depending on what your issue is (TLS 1.2 not available, or the cipher strength) you can change one or both of these lines.
The Debian config currently looks like:

MinProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2

Depending on what you need, you can lower the SECLEVEL to allow not-so-good ciphers, or lower the minimum protocol level if you can’t use TLS 1.2.
That would look like this if you’d have to change both lines:

MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=1

In your Dockerfile you can do this with the following statements:

RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf
RUN sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf

The best idea is to get the SQL guys to upgrade security on the SQL server itself and make sure the default works. But in corporate environments this is something that can take a while and you probably need to get that app running yesterday, so…

windows 10 upgrade on a dell xps 17 / L702x

I clean installed Windows 10 on my Dell XPS 17 L702x a while ago, and this post is merely here to indicate: yes, it works, and it’s not even a big deal. It also got rid of the Dell software junk preinstalled on my machine, which is awesome.

Dell doesn’t support the Windows 10 upgrade though, which is why it scares people, and your mileage may vary. In my case the only thing that stopped working after the upgrade is the SD card reader. The drivers for that device haven’t been updated since the Windows 7 version and I didn’t find anything more recent. It took me a few months to find out it was broken, so that shows how much I need that thing.

So to summarize:

You can upgrade your OS or do a clean install with Windows 10, but your SD card reader will not work afterwards. If your life depends on that card slot on the left, it’s wise not to upgrade.

As always, be careful with drastic operations like this and backup your data first. For real this time. Plenty of stuff can go wrong and knowing that your data is safe makes it a lot less stressful if shit does happen to hit the fan.

querying Elasticsearch with Powershell with a little help from Kibana

Kibana is a great dashboarding tool to easily query an Elasticsearch store on the fly, without having to know exactly how to write an Elasticsearch query. For example if you’re using Logstash to dump all your log files into an Elasticsearch DB and use Kibana to nail down that specific weird exception you’re seeing.
Kibana is great for showing some graphs and giving a pretty good overview, but what if you want the underlying query data so you can do some processing on it? You can’t really export it from the dashboard, but for each of those table or graph panels you can click the “Inspect” button and see what Elasticsearch query is used to get the data for the panel.

It looks something like this:

curl -XGET 'http://yourserver:9999/logstash_index_production/_search?pretty' -d '{
"query": { ...
}'

This is a curl statement and contains all you need to run the same query using PowerShell. The easiest thing to do is to copy the whole JSON statement into a text file and strip out the curl bit and the URL. Keep the URL handy though, because that’s the one you’ll need to target in the Invoke-RestMethod call.
If you refactor it into something like the statements below and save it as a .ps1 file you can run it from the command-line and get the results back as PowerShell objects parsed from the JSON result. Yes. PowerShell is that cool. ;)

$elasticQuery = @"
{
"query": { ... }
}
"@

$elasticUri = 'http://yourserver:9999/logstash_index_production/_search?pretty'
Invoke-RestMethod -Uri $elasticUri -Method Post -Body $elasticQuery

To store the results in a local variable you just run it like this:

$r = .\RunElasticQuery.ps1

Now you’re free to run all sorts of funky processing on the data or perhaps dump it to a CSV file.
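
As a small sketch, assuming the usual Elasticsearch response shape, dumping the source document of each hit to a CSV file looks like this:

$r = .\RunElasticQuery.ps1
# Each hit's _source property holds the original logged document.
$r.hits.hits | ForEach-Object { $_._source } | Export-Csv -Path .\results.csv -NoTypeInformation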

If you’re good enough at the Elasticsearch DSL you can even skip the Kibana query shortcut and modify the query itself in your PowerShell script.
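
For example, a minimal match query (the field name is a placeholder for whatever your Logstash mapping uses) would look like this in the script:

$elasticQuery = @"
{
  "query": { "match": { "message": "timeout" } },
  "size": 50
}
"@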

Photo by Jeroen Bennink, cc-licensed.

invoke-webrequest pro tips

The Invoke-WebRequest PowerShell cmdlet is great if you want to fetch and work with some web page’s output without installing any extra tools like wget.exe for example. If you’re planning to do some text parsing on a web page anyway, PowerShell is an excellent option, so why not go full PS mode?
Unfortunately the command has some drawbacks: it’s a lot slower than it should be if you just want plain text, and its response parsing can even cause it to lock up and not return a result at all.

So here’re some pro-tips for parsing the output using PowerShell fast and effectively:

1. Use basic parsing

The cmdlet does some DOM parsing by default using Internet Explorer. This takes time and sometimes fails too, so if you want to skip this bit and make things faster, simply add the command-line switch UseBasicParsing:

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing

2. Split HTML in lines

Parsing text in PS is easy, but it’s even easier if the result is formatted like a text file with multiple lines instead of the full HTML in a single string. If you get the Content property from your webpage, you can split it up into separate lines by splitting on the newline character:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Content -split "`n"

Or, if you also want the HTTP header info to be included in the result, use RawContent instead:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).RawContent -split "`n"

This can be really handy if you want to automatically check if the right response headers are set.
But you can also use the Headers collection on the result object, which is even easier.
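
For example, checking a single response header is a one-liner (the header name is just an example):

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing
$r.Headers['Content-Type']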

3. Disable download progress meter shizzle to download large files (or always to speed things up)

That download progress bar is a nice visual when you’re using Invoke-WebRequest to download some large binaries and want to see its progress, but it significantly slows things down too. Set the $progressPreference variable to SilentlyContinue and you’ll see your scripts download those files a lot faster.
The larger the files (like big ass log files, images, videos, etc.) the more this matters, I’ve noticed.

$progressPreference = 'SilentlyContinue'
Invoke-WebRequest $logurl -OutFile .\logfile.log -UseBasicParsing
$progressPreference = 'Continue'

Be sure to reset this setting afterwards, because this affects any cmdlet using that progress-bar feature.
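
One way to make sure the preference always gets reset, even if the download fails, is to wrap it in a try/finally block:

$progressPreference = 'SilentlyContinue'
try {
    Invoke-WebRequest $logurl -OutFile .\logfile.log -UseBasicParsing
}
finally {
    # Restore the default so other cmdlets show their progress bars again.
    $progressPreference = 'Continue'
}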

4. No redirects please.

Invoke-WebRequest automatically follows an HTTP redirect (301/302) so you end up with the page you were looking for in most cases.
If you want to test whether a URL is properly redirected (or not redirected), this just makes things harder. In that case you can turn off redirects by using the MaximumRedirection parameter and setting it to 0.

When you get a URL that returns a 301 when doing this, the command will throw an exception saying the maximum redirection count has been exceeded, which makes this case easier to test.
The result object will also contain the redirect StatusCode.

$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0

5. Use the PowerShell result object

It’s overkill in some cases, but in others this is pure win. The result object contains some really handy bits of the webpage, making a lot of tricky text and regex parsing obsolete.
It’s a piece of cake to list all images linked from a page using the Images collection. Want to parse all outgoing links on a page? Use the Links collection. There’s also a StatusCode, a Headers collection, Forms and InputFields collections for form parsing, and more.
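
For example, listing all outgoing link URLs on a page is a one-liner:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Links.href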
Check out what’s available using Get-Member:

Invoke-WebRequest https://n3wjack.net | Get-Member

6. If all else fails, use wget.exe

Yep. Sometimes Invoke-WebRequest simply doesn’t cut it. I’ve seen it hang on some complex pages trying to parse them and fail miserably.
In that case you can fetch the page using the GNU WGet tool, download the page as a text file and then parse that.
You have to call wget with the .exe extension, otherwise you’ll be triggering the PowerShell alias for Invoke-WebRequest again.

# Install WGet with Chocolatey
choco install wget

# Get the page and save it as a text file
wget.exe https://n3wjack.net -O nj.html
# Read the file and parse it.
get-content nj.html | % { <# parsing code goes here #> }

That’s all the tips you need to make your web request parsing in PowerShell a breeze. If you know a good tip that isn’t listed here, feel free to drop a comment below!