Category Archives: programming

querying Elasticsearch with Powershell with a little help from Kibana

Kibana is a great dashboarding tool to easily query an Elasticsearch store on the fly, without having to know exactly how to write an Elasticsearch query. It's perfect when, for example, you're using Logstash to dump all your logfiles into an Elasticsearch DB and want to nail down that specific weird exception you're seeing.
Kibana is great at showing graphs and giving a pretty good overview, but what if you want to take that query data and do some processing on it? You can't really export it from the dashboard, but for each of those table or graph panels on your dashboard you can click the "Inspect" button and see which Elasticsearch query is used to get the data for the panel.

It looks something like this:

curl -XGET 'http://yourserver:9999/logstash_index_production/_search?pretty' -d '{
"query": { ...
}'

This is a curl statement and it contains all you need to run the same query using PowerShell. The easiest thing to do is to copy the whole JSON statement into a text file and strip out the curl bit and the URL. Keep the URL handy, because that's the URL you'll need to target in the Invoke-RestMethod call.
If you refactor it into something like the statements below and save it as a .ps1 file you can run it from the command-line and get the results back as PowerShell objects parsed from the JSON result. Yes. PowerShell is that cool. ;)

$elasticQuery = @"
{
"query": { ... }
}
"@

$elasticUri = 'http://yourserver:9999/logstash_index_production/_search?pretty'
Invoke-RestMethod -Uri $elasticUri -Method POST -Body $elasticQuery
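
One thing to watch out for: Elasticsearch 6 and up reject JSON requests that don't set an explicit content type, so on newer servers add the ContentType parameter:

Invoke-RestMethod -Uri $elasticUri -Method POST -Body $elasticQuery -ContentType 'application/json'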

To store the results in a local variable you just run it like this:

$r = .\RunElasticQuery.ps1

Now you’re free to run all sorts of funky processing on the data or perhaps dump it to a CSV file.

If you’re good enough at the Elasticsearch DSL you can even skip the Kibana query shortcut and modify the query itself in your PowerShell script.

Photo by Jeroen Bennink, cc-licensed.

invoke-webrequest pro tips

The Invoke-WebRequest PowerShell commandlet is great if you want to get and work with some web page’s output without installing any extra tools like wget.exe for example. If you’re planning to do some text parsing on a web page anyway, PowerShell is an excellent option, so why not go full PS mode?
Unfortunately the command has some drawbacks that make it a lot slower than it should be if you just want plain text, and its response parsing can even cause it to lock up and not return a result at all.

So here’s some pro-tips for parsing the output using PowerShell fast and effectively:

1. Use basic parsing

The commandlet does some DOM parsing by default using Internet Explorer. This takes time and sometimes fails too, so if you want to skip this bit and make things faster, simply add the command-line switch UseBasicParsing:

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing

2. Split the HTML into lines

Parsing text in PS is easy, but it’s even easier if the result is formatted like a text file with multiple lines instead of the full HTML in a single string. If you get the Content property from your webpage, you can split it up into separate lines by splitting on the newline character:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Content -split "`n"

Or, if you also want the HTTP header info to be included in the result, use RawContent instead:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).RawContent -split "`n"

This can be really handy if you want to automatically check if the right response headers are set.
But you can also use the Headers collection on the result object, which is even easier.
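
For example, checking a specific response header is a one-liner once you have the result object:

# The Headers property is a dictionary of the response headers.
$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing
$r.Headers['Content-Type']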

3. Disable download progress meter shizzle to download large files (or always to speed things up)

That download progress bar is a nice visual when you're using Invoke-WebRequest to download some large binaries and want to see their progress, but it significantly slows things down too. Set the $progressPreference variable to 'SilentlyContinue' and you'll see your scripts download those files a lot faster.
The larger the files (think big log files, images, videos, etc.), the more this matters, I've noticed.

$progressPreference = 'silentlyContinue'
invoke-webrequest $logurl -outfile .\logfile.log -UseBasicParsing
$progressPreference = 'Continue'

Be sure to reactivate this setting afterwards, because this affects any commandlet using that progress-bar feature.
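
A safer variant is to restore it in a finally block, so the preference gets reset even when the download throws:

$progressPreference = 'SilentlyContinue'
try {
    # Download without the progress bar overhead.
    Invoke-WebRequest $logurl -OutFile .\logfile.log -UseBasicParsing
}
finally {
    # Always restore the default progress behavior.
    $progressPreference = 'Continue'
}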

4. No redirects please.

Invoke-WebRequest automatically follows an HTTP redirect (301/302), so in most cases you end up on the page you were looking for.
If you want to test whether a URL is properly redirected (or not redirected), this just makes things harder. In that case you can turn off redirects by using the MaximumRedirection parameter and setting it to 0.

When you hit a URL that returns a 301 while doing this, the command will throw an error saying the maximum redirection count has been exceeded, which makes this case easy to test.
The result object will also contain the redirect StatusCode.

$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0
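
Putting that together, a redirect check could look something like this (the exact error behavior differs a bit between PowerShell versions, so treat it as a sketch):

# Suppress the 'maximum redirection count exceeded' error and inspect the result.
$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0 -ErrorAction SilentlyContinue
if ($r.StatusCode -eq 301) {
    "Redirects to: $($r.Headers.Location)"
}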

5. Use the PowerShell result object

It’s overkill in some cases, but in others this is pure win. The result object contains some really handy bits of the webpage, making a lot of tricky text and regex parsing obsolete.
It’s a piece of cake to parse all images linked from a page using the Image collection. Want to parse all outgoing links on a page? Use the Links collection. There’s also a StatusCode, a Headers collection a Forms and Inputfield collection for form parsing and more.
Check out what’s available using Get-Members:

Invoke-WebRequest https://n3wjack.net | Get-Member
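
For instance, grabbing every outgoing link target from a page is a one-liner with the Links collection:

# List all unique link targets found on the page.
$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing
$r.Links.href | Sort-Object -Unique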

6. If all else fails, use wget.exe

Yep. Sometimes Invoke-WebRequest simply doesn't cut it. I've seen it hang on some complex pages, trying to parse them and failing miserably.
In that case you can fetch the page using the GNU Wget tool, download the page as a text file and then parse that.
You have to call wget with the .exe extension, otherwise you'll be triggering the PowerShell alias for Invoke-WebRequest again.

# Install WGet with Chocolatey
choco install wget

# Get the page and save it as a text file
wget.exe https://n3wjack.net -O nj.html
# Read the file and parse it.
get-content nj.html | % {
    # parsing code goes here
}

passing a function as a parameter in PowerShell

A powerful pattern in software engineering is passing one function or object into another, which then executes the given one dynamically. This allows you to extend the behavior of code without having to edit that code. It's known as the Strategy Pattern and allows for nice, clean and decoupled code.

In PowerShell you can do this by passing a function as an argument to another function. When I tried to do this, I found out it wasn't as trivial as I thought it would be, so here's the nitty gritty on how to work that magical extensibility pattern.

Basically you want to do something like passing in a function which processes a single item in a loop controlled by another piece of code. Something like:

function Print-Number($number)
{
    echo "Number is $number"
}

function Do-Loop($function)
{
    $numbers = 1..10
    foreach ($number in $numbers)
    {
        # Here we should call $function and pass in $number
    }    
}

# Call Do-Loop with the Print-Number function as a parameter here.

I ran into 2 problems here.

  1. How do I pass in a function as a reference to another one?
  2. How do I call one of those function blocks from the function that receives it?

First things first. The syntax for passing in a function inline looks like this:

Do-Loop ${function:Print-Number}

Note the function: prefix there. It’s the magic bit. This accesses the function object’s script block without executing it.
You can also list all functions available in your PowerShell session like this:

ls function:

If you want to pass in a script block like an anonymous lambda, C# style, without defining a function first, you can do this:

Do-Loop { param($content) write-host $content }

Want to reuse that function a few times and store it in a variable? No problemo, just do this:

$function = { param($content) write-host $content }
Do-Loop $function
Do-SomethingElse $function

That pretty much covers all the options for problem number 1.
So now on the second problem: calling that passed in function or script block inside our host function.

Let’s say we want to call that function from a loop. To call the function you need to use the Invoke-Command commandlet and pass in the argument using the ArgumentList parameter like this:

foreach ($number in $numbers)
{
    Invoke-Command $function -ArgumentList $number
}

Pretty simple right?
The argument list expects an array as its value. So if you want to pass in 2 parameters, like a number and a message text, that would look something like this:

Invoke-Command $function -ArgumentList $number, $message
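
For completeness, a matching script block taking those two parameters could look like this (purely illustrative):

# A hypothetical two-parameter script block and its invocation.
$function = { param($number, $message) "$number : $message" }
Invoke-Command $function -ArgumentList 42, 'the answer'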

Putting it all together, here’s the working sample code:

function Print-Number($number)
{
    echo "Number is $number"
}

function Do-Loop($function)
{
    $numbers = 1..3
    foreach ($number in $numbers)
    {
        Invoke-Command $function -ArgumentList $number
    }    
}

Do-Loop ${function:Print-Number}

Because of this array-as-a-parameter thing, however, I did run into a little snag with my actual code.
What if the first and only parameter is an array in itself? How do you make it clear to the Invoke-Command commandlet that the array is a single parameter, not a list of parameters to pass into the function?

In my case I was passing in the content of a text file which is an array of strings. My first argument ended up being the first string of that array and I was lacking the rest of the file’s lines.

$array = Get-Content .\somefile.txt
Invoke-Command $function -ArgumentList $array # ?????

Again, there’s a little trick to that which I found on Stack Overflow. The syntax to pass in an array as a parameter is:

Invoke-Command $function -ArgumentList (,data)

That leading comma is PowerShell's unary array operator: it wraps our array in a new single-element array.
ArgumentList then unrolls that outer array into separate parameters, so its one element, our original array, is passed in whole as the required array parameter.
A silly example to demonstrate this:

function Enhance($lines)
{
    $lines | % { "  > $_" }
}

function Do-It($function)
{
    $content = get-content .\awesome.txt
    Invoke-Command $function -ArgumentList (,$content)
}

Do-It ${function:Enhance}

fixing MSB3644 build errors and the point of .NET targeting packs

Build errors on build servers suck. If it builds locally, why the hell doesn't it build on the build server? Well, there are plenty of reasons for that, but for a .NET developer it usually means that something which came with your Visual Studio install isn't installed on the build server.
But you probably don’t want to install the full-blown VS on the build server, so the question now is: what bit do I need to install?

Recently I ran into the following build error on 1 specific build server (yep, not on another one, fun, fun, fun).

C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\bin\Microsoft.Common.CurrentVersion.targets(1111, 5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v4.5.2" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

It clearly has something to do with a project which targets .NET framework v4.5.2.
The answer in fact is right there. You need to install a “targeting pack”.
But WTF is that thing and why do I need it?

Apparently having the .NET framework itself installed on your machine isn't enough to be able to build a project that targets a specific version of the .NET framework. It makes sense, as the .NET framework installation is actually the runtime, used to run .NET applications, not to build them.
If you want to build apps against a specific version of the .NET framework, you need the matching targeting pack on your build machine as well. That's either your development box, which has VS on it and thus the required targeting packs that come with the VS installation, or your build server, where you have to install the targeting packs yourself.

You can check which targeting packs you already have on a machine by checking the sub-folders in "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework".
For each supported framework version, there’s a v<version number> folder there. In my case there was no v4.5.2 folder on that one machine.
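
A quick way to check this from PowerShell, assuming the default install location:

# List the installed .NET Framework targeting packs.
Get-ChildItem 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework' -Directory |
    Select-Object -ExpandProperty Name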

So where do you find those targeting packs for the various .NET framework versions? Microsoft luckily compiled a nice list where you can find all the download links and instructions.
For the versions listed as “included in VS 2017”, you can see them all listed in the VS 2017 installer if you go to the “Individual components” tab.

Look at all those .NET targeting packs.

So a shortcut to install all the packs at once is to grab the VS 2017 installer and use that one. You might want to disable all the IDE-specific stuff you won't need on the build server though.

make vim awesome with plugins


Vim is a great lightweight editor as it is. But after setting it all up on your Windows box and tweaking your _vimrc, it still might lack that bit of awesome you're looking for in a modern text editor.

Time to spice things up with plugins!

Vim plugins are written in Vimscript (aka VimL), vim's internal scripting language, and are plain .vim files containing script code which extends vim in all sorts of wonderful ways. There are tons of vim scripts out there, so finding the right ones for your needs takes a bit of time. There are however some helpful guides out there and blog posts like this one to help you on your way. I'll list some of those and links to more plugins at the end.

Installing those scripts and plugins can be tedious though. Download a zip, unpack, copy files, yada-yada-yada. Since we’re into package managers these days we want things to go automatically with a few keystrokes.

Enter Vundle.

You’ve probably guessed by now this is a vim plugin manager (and a plugin by itself). It allows you to install, update and search for available vim scripts among other things. I like this one in particular because it does this all from vim itself with a number of specific commands.
To get started you’ll have to install this one manually though, but it only takes a few command line statements and some .vimrc edits. Once you have this up and running, you’ll be able to install most plugins using it so it’s worth the hassle.

Check out the info on the Vundle github page on how to install and then come back here. ;)
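
Once Vundle is up and running, the plugin section of your _vimrc ends up looking roughly like the sketch below, listing the plugins covered in the rest of this post (the GitHub repo names are my best guess, so double-check them on VimAwesome before installing):

" Minimal Vundle setup sketch; assumes Vundle is cloned to ~/.vim/bundle/Vundle.vim.
set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'VundleVim/Vundle.vim'
Plugin 'vim-airline/vim-airline'
Plugin 'ervandew/supertab'
Plugin 'scrooloose/nerdtree'
Plugin 'ctrlpvim/ctrlp.vim'
Plugin 'thinca/vim-fontzoom'
Plugin 'chriskempson/base16-vim'
call vundle#end()
filetype plugin indent on
" Run :PluginInstall inside vim to pull everything in.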

Alright. Now, what plugins should we get?
Well it depends on what you want to do of course, but here are some general purpose ones you might like.

vim-airline

A pretty looking status bar you’ll see in a lot of vim screenshots. It’s tweakable so you choose what kind of info you want it to show.

vim airline status bar plugin

ervandew/supertab

Adds tab completion to vim using the tab key. That might sound odd, but the default completion uses a bunch of control key combinations, so this just feels more "natural".

The-NERD-tree

This is a directory browsing plugin which is just better than the Netrw that comes out of the box. You can visually fold/unfold folders, search (use any vim command in the window), manipulate files, etc. Very handy to keep track of a project while programming, or just to see what other files are in a folder without having to exit vim.

vim nerdtree plugin

CtrlP

A fuzzy file search plugin. Press Ctrl-P and you'll get a list at the bottom of the files in your current directory. Type in a few characters of the filename you're looking for and it will filter the files matching those characters. So you don't need to know the full name, and you can skip parts. Check out this video to get an idea of how it works.

One note on this. If you have a folder with a lot of files in the sub-folder tree (like a C# application with build files in the sub-folders), be sure to exclude any non-relevant types like object & dll files. CtrlP has a maximum file limit, and those irrelevant files can stop you from finding the ones you actually want to see.

In my _vimrc I use this to exclude the .NET build artifacts and some more irrelevant file types:

set wildignore+=*\\obj\\*,*\\bin\\*,*.swp,*.zip,*.exe,*.dll

vim-fontzoom

vim-fontzoom is a simple plugin that allows you to increase or decrease your vim font size using the plus or minus key when you are in command mode. Note that this doesn’t work with the +/- on your numeric pad, just with the regular keys on your keyboard. But you can remap the keys if you want to change this.

chriskempson/base16-vim themes

Not really a functional plugin but hey, you want your editor to look pretty right? I’ve tried a ton of themes already but lately I’m sticking with the chriskempson/base16-vim set. In this package you get a bunch of nicely crafted and balanced color themes which will definitely have something you like. Dark and light themes, monokai, solarized and other classics, it has it all. The last theme plugin you’ll ever need.

Moar!

Depending on your workload there are plenty of more specific plugins out there. Google is your friend, but here's a few places to get started:

  • The easiest to use and most awesome Vim plugin directory is called VimAwesome. Great to find new plugins, or great to find old ones and how to install them. Each plugin lists how to install it with Vundle or another plugin manager, which is super handy.
  • The 15 best vim plugins according to Steve Francia who made a vim distro called The Ultimate Vim Distribution, so I guess he knows what he’s talking about. :)


reactjs.net clearscriptv8 load error after publishing a website

Crash Here’s another ReactJS.NET quirk I ran into lately. While working on an ASP.NET site using the ReactJS.NET Nuget package to render content using React.JS server side we got this error message on the web server after deployment:

Cannot load V8 interface assembly. Load failure information for ClearScriptV8-64.dll:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\4e3fedda\f5d1e0ef\assembly\dl3\68f03368\00cf5237_117bd201\ClearScriptV8-64.dll: Could not load file or assembly 'file:///C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\4e3fedda\f5d1e0ef\assembly\dl3\68f03368\00cf5237_117bd201\ClearScriptV8-64.dll' or one of its dependencies. The system cannot find the file specified.

The annoying bit here is that everything works fine locally, but not when it was deployed to the server using an Octopus package.
After debugging, searching online and going through the build log files, I figured out that the problem could be caused by:

  • Missing DLL’s because of a missing Nuget package (quite obvious but not a problem in my case).
  • The VS 2013 C++ Redistributables are not installed on the server, which is a common cause for this error related to the ClearScript assemblies.

In my case the problem was a variation of the first one. Once the web app was published, the DLL files were not in the expected bin\x64 & bin\x32 sub-folders, but in the root of the application's bin folder. So the DLLs were there, but not in the correct spot.

The cause of this problem is the _CopyWebApplicationLegacy MSBuild task which lives in the Microsoft.WebApplication.targets file.
This creates a _PublishedWebsites folder containing all the files you need to deploy to make your website run, yet it doesn't trigger the build task from the JavaScriptEngineSwitcher NuGet package which places those DLLs in the x64 & x32 sub-folders. That special build task is included in the ReactJS.NET NuGet package, so normally you don't have to do anything extra for this.
Because that task doesn't get triggered in the build step that creates the deployment package, the ClearScriptV8-64.dll, v8-x64.dll & ClearScriptV8-32.dll, v8-x32.dll don't end up in their x64 & x32 sub-folders.

I fixed this by moving the files to their rightful location while creating the package for Octopus Deploy, using a PowerShell script. There's probably a way to fix this with an extra build task too, but man, I spent so much time finding this issue in the first place that I really didn't feel like getting myself into that mess as well.
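
The fix-up itself is only a few lines of PowerShell; something along these lines (a sketch, with a hypothetical publish output path you'd adjust for your own project):

# Move the ClearScript native DLLs into the sub-folders the loader expects.
$bin = '.\_PublishedWebsites\MySite\bin'  # hypothetical path, adjust as needed
New-Item -ItemType Directory -Force -Path "$bin\x64", "$bin\x32" | Out-Null
Move-Item "$bin\ClearScriptV8-64.dll", "$bin\v8-x64.dll" -Destination "$bin\x64"
Move-Item "$bin\ClearScriptV8-32.dll", "$bin\v8-x32.dll" -Destination "$bin\x32"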

Photo by Ted Van Pelt, cc-licensed.