Category Archives: geek

download cover art for all your albums, with powershell

Album covers are nice eye candy when you’re using a media player like Foobar2000, which automatically picks up a cover.jpg file in an album folder. The problem is that I have a lot of mp3 albums I ripped from my CDs way back, and those don’t have any fancy cover art files.

I looked around for some tools that could automagically download covers for my albums but didn’t find anything handy. Since my music is structured in sub-folders like (artist)\(album), I thought this should be easy enough to parse and get pictures for.
If only there was a service that could easily provide those…

I tried the MusicBrainz APIs, but those turned out to be hard to use and didn’t give me any covers for some test albums either. Then I thought of Last.fm. They have a lot of cover art, and their URL structure is the same as my folder structure… hmmm.

And here it is, a PowerShell script which runs over your folder structure, tries to get the album page from Last.fm and then saves a cover.jpg image from the album page metadata.

A few things to know:

  • Your mp3’s are expected to be in a folder structure like (artist)\(album)\*.mp3
    E.g. The Prodigy\The Fat of the Land
  • If a folder contains any JPG or PNG image, it will be skipped. So that means you can run the script multiple times, and it will only download images once.
  • The “Various artists” folder is skipped by default because it doesn’t fit the search pattern. If you store this type of album in a folder with a different name, you might want to update that line with the correct folder name. If the script does end up processing such a folder because it’s named differently, nothing will go wrong. It simply won’t find any album matches.

To use it, copy the code below into a file called get-albumart.ps1, or whatever name you fancy. Then run it as follows to get those pretty album covers:

.\get-albumart.ps1 d:\music

And as always, this script comes as is, without any kind of warranty, and you’re free to try it at your own risk. I wrote and used it and it worked great for me. I hope it works for you too. If Last.fm sues you because you’re downloading every image they have on the site because of your huge album collection? You didn’t get this script from me, OK. Nope. Not me. ;-)

param ([Parameter(Mandatory=$true)][string]$path)

$progressPreference = 'silentlyContinue'
pushd
cd $path
# Skip the compilations folder; it doesn't match the (artist)\(album) pattern.
$artistFolders = ls -directory | where { $_.name -ne "Various artists"}

foreach ($artistFolder in $artistFolders)
{
    $artist = $artistFolder.name
    write-host "::: $artist :::" -foregroundcolor green

    cd -Literalpath $artistFolder
    $releaseFolders = ls -directory
    
    foreach ($releaseFolder in $releaseFolders)
    {
        $release = $releaseFolder.name
        write-host "$release" -foregroundcolor cyan
        cd -literalpath "$releaseFolder"

        if ((test-path *.png) -or (test-path *.jpg))
        {
            write-host "- Images found, skipping."
        }
        else
        {
            $url = "https://www.last.fm/music/$($artist)/$($release)"
            $r = $null

            try 
            {
                $r = invoke-webrequest $url -usebasicparsing
            }
            catch 
            {
                write-host "- Release not found, skipping: $artist - $release" -foregroundcolor red
            }

            if ($r -ne $null)
            {
                # Find the og:image metadata line and pull the image URL out of it.
                $s = $r.content -split "`n" | where { $_ -like "*`"og:image`"*"}
                $img = ($s -split '"') | where { $_ -like "*https*.jpg*" } | select-object -first 1

                if ($img -ne $null)
                {
                    write-host "- Downloading image for $artist - $release from $url"
                    invoke-webrequest $img -outfile cover.jpg
                }
                else
                {
                    write-host "- No image for $artist - $release from $url" -foregroundcolor yellow
                }
            }
        }
        cd ..
    }
    cd ..
}

popd
$progressPreference = 'Continue'
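One caveat: artist or album names containing characters like “&” or “#” can break the URL the script builds. A possible tweak, which is my own addition and not part of the original script, is to escape both names when building the URL:

            $url = "https://www.last.fm/music/$([uri]::EscapeDataString($artist))/$([uri]::EscapeDataString($release))"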

tweet from the command line

I could’ve used a screenshot of a command line window, but instead you get a nice hummingbird. Because, you know, Twitter.

It’s so simple it’s hardly worth a blog post, but since a simple tweet isn’t so easy to find using a search engine, I’ll just put it up here anyway.
Let’s say you come up with this brilliant joke or insight and you want to tweet it instantly to the world. Now you have to open up a browser, type in that dreadfully long twitter.com URL, wait for the site to load, type in your tweet and hit send.
Man. That’s a lot of work.

But what if you could just enter this from the command line?

tweet OMG I love tweeting from the command line

Wow. That would be awesome. Because you always have a console window open anyway, being an edgy and trendy developer using all those nifty command line tools, right?
You betcha.

So how about that awesome batch script? What does that look like? Well here you go:

@start "" "https://mobile.twitter.com/compose/tweet?text=%*"

That’s all it takes. Save that as tweet.cmd and put it somewhere in your PATH environment variable so Windows can find and run it.
It’ll launch the Twitter mobile site, and all you’ll have to do is hit “Send”.
So sweet.
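If you live in PowerShell, a rough equivalent might look like the function below. This is a sketch of my own, not from the original post; the [uri]::EscapeDataString call encodes the text so characters like “&” or “#” don’t break the query string.

# Hypothetical PowerShell take on tweet.cmd.
function tweet {
    # Join all arguments into one string and URL-encode it, like %* in the batch file.
    $text = [uri]::EscapeDataString(($args -join ' '))
    Start-Process "https://mobile.twitter.com/compose/tweet?text=$text"
}

Drop that in your profile script and you can tweet from the console the same way.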

cool vim tips and tricks you might not know

Vim has tons of awesome shortcuts and features you pick up over time. Some of those I don’t use every day, so I write them down to look up again when I can’t remember exactly how they work. Instead of keeping them locked away in a text file, I’ll throw them online here and spread the Vim love. None of these need any special plugins. They should all work right out of the box with plain old Vim.

If you want to know more about a specific command listed here, use the Vim :help command to find out more. There are usually more options and possibilities for each of these commands and the Vim documentation is excellent.

Here we go!

When you are on the command line using a console application and you want to edit the output in Vim right away, or open that long list of possible command line switches in Vim for reference, this one will come in handy.
I’m using GVim here because it opens in a separate window from your shell, which makes this the most useful.

ls *.txt | gvim -
docker -h | gvim -
git --help | gvim -

This one is for opening a ton of files in a single Vim instance from PowerShell, in different tabs. This means you are running this from a PowerShell console, of course.

gvim -p (ls *.ps1)

For more Vim command line options run this in your favorite shell environment:

vim -h
gvim -h

How about opening a remote file, or fetching the HTML from a page over HTTP, straight in Vim:

:e https://n3wjack.net/

When you work with log files a lot, being able to delete lines containing or not containing a specific word can be just what you need.
These two commands are perfect to filter out obsolete exceptions and figure out what is causing that nasty production issue:

:g/DeleteAll/d
:v/DeleteAllButThis/d

Did you know that Vim has a spell checker? I didn’t, for the longest time (try :h spell for more details).
To activate/deactivate:

:set (no)spell

To jump to the next / previous misspelled word:

]s
[s

To get a list of spelling suggestions (or use the mouse in GVim, which is quite practical):

z=

You can add a simple XML-tidy shortcut to your .vimrc file by adding the following command.
What it does is set the file type to XML, remove any spaces between a closing and an opening bracket, insert a newline between those brackets, and finally re-indent the document so it looks all nice and indented.

nmap <leader>ppx <Esc>:set filetype=xml<CR>:%s/> *</></g<CR>:%s/></>\r</g<CR><ESC>gg=G<Esc>:noh<CR>

You can force syntax highlighting in Vim as follows for e.g. HTML, XML and Markdown.
Of course this works for a ton of other file types as well, as long as you can figure out what the extension/file type key is. But that’s pretty easy in most cases.

:set syntax=html
:set syntax=xml
:set syntax=markdown

I add shortcuts for any files I frequently edit by using the leader key and a letter combination that’s easy to remember.
For example this one to edit my custom .vimrc file when I press the leader key followed by “e” and “v” (edit vimrc).

nnoremap <Leader>ev :tabe ~\vimfiles\nj-vimrc.vim<CR>

That’s about it. For more nice Vim tips check out my other Vim posts. Another good resource for bite-sized Vim tips is the MasteringVim account on Twitter and its newsletter.

windows 10 upgrade on a dell xps 17 / L702x

I clean installed Windows 10 on my Dell XPS 17 L702x a while ago, and this post is merely here to indicate: yes, it works, and it’s not even a big deal. It also got rid of the Dell software junk preinstalled on my machine, which is awesome.

Dell doesn’t support the Windows 10 upgrade though, which is why it scares people, and your mileage may vary. But in my case the only thing that stopped working after the upgrade is the SD card reader. The drivers for that device haven’t been updated since the Windows 7 version and I didn’t find anything more recent. It took me a few months to notice it was broken, so that shows how much I need that thing.

So to summarize:

You can upgrade your OS or do a clean install with Windows 10, but your SD card reader will not work afterwards. If your life depends on that card slot on the left, it’s wise not to upgrade.

As always, be careful with drastic operations like this and backup your data first. For real this time. Plenty of stuff can go wrong and knowing that your data is safe makes it a lot less stressful if shit does happen to hit the fan.

querying Elasticsearch with Powershell with a little help from Kibana

Kibana is a great dashboarding tool to easily query an Elasticsearch store on the fly, without having to know exactly how to write an Elasticsearch query. For example if you’re using Logstash to dump all your logfiles into an Elasticsearch DB and use Kibana to nail down that specific weird exception you’re seeing.
Kibana is great for showing some graphs and giving a pretty good overview, but what if you want to take that query data and do some processing on it? You can’t really export it from the dashboard, but for each of the table or graph panels on your dashboard you can click the “Inspect” button and see what Elasticsearch query is used to get the data for the panel.

It looks something like this:

curl -XGET 'http://yourserver:9999/logstash_index_production/_search?pretty' -d '{
"query": { ...
}'

This is a curl statement and contains all you need to run the same query using PowerShell. The easiest thing to do is to copy the whole JSON statement into a text file and strip out the curl bit. Keep the URL handy though, because that’s the one you’ll need to target in the Invoke-RestMethod call.
If you refactor it into something like the statements below and save it as a .ps1 file, you can run it from the command line and get the results back as PowerShell objects, parsed from the JSON result. Yes. PowerShell is that cool. ;)

$elasticQuery = @"
{
"query": { ... }
}
"@

$elasticUri = 'http://yourserver:9999/logstash_index_production/_search?pretty'
# Note: newer Elasticsearch versions require an explicit JSON content type.
Invoke-RestMethod -Uri $elasticUri -Method Post -Body $elasticQuery -ContentType 'application/json'

To store the results in a local variable you just run it like this:

$r = .\RunElasticQuery.ps1

Now you’re free to run all sorts of funky processing on the data or perhaps dump it to a CSV file.
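For example, flattening the hits and exporting them to CSV could look like this. This is a sketch; the properties on each hit’s _source depend entirely on what your Logstash setup indexes.

$r = .\RunElasticQuery.ps1
# Each hit's _source property holds the original log document.
$r.hits.hits | foreach { $_._source } | export-csv results.csv -NoTypeInformation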

If you’re good enough at the Elasticsearch DSL you can even skip the Kibana query shortcut and modify the query itself in your PowerShell script.

Photo by Jeroen Bennink, cc-licensed.

invoke-webrequest pro tips

The Invoke-WebRequest PowerShell commandlet is great if you want to get and work with some web page’s output without installing any extra tools like wget.exe for example. If you’re planning to do some text parsing on a web page anyway, PowerShell is an excellent option, so why not go full PS mode?
Unfortunately the command has some drawbacks, causing it to be a lot slower than it should be if you just want plain text, and its response parsing can even cause it to lock up and not return a result at all.

So here are some pro tips for parsing web page output using PowerShell fast and effectively:

1. Use basic parsing

The commandlet does some DOM parsing by default, using Internet Explorer. This takes time and sometimes fails too, so if you want to skip that bit and make things faster, simply add the -UseBasicParsing switch:

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing

2. Split the HTML into lines

Parsing text in PS is easy, but it’s even easier if the result is formatted like a text file with multiple lines instead of the full HTML in a single string. If you get the Content property from your webpage, you can split it up into separate lines by splitting on the newline character:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Content -split "`n"

Or, if you also want the HTTP header info to be included in the result, use RawContent instead:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).RawContent -split "`n"

This can be really handy if you want to automatically check if the right response headers are set.
But you can also use the Headers collection on the result object, which is even easier.
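For example, checking a single response header directly becomes trivial (Content-Type here is just an arbitrary pick):

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing
$r.Headers["Content-Type"]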

3. Disable download progress meter shizzle to download large files (or always to speed things up)

That download progress bar is a nice visual when you’re using Invoke-WebRequest to download some large binaries and want to see their progress, but it significantly slows things down too. Set the $progressPreference variable to 'silentlyContinue' and you’ll see your scripts download those files a lot faster.
The larger the files (like big-ass log files, images, videos etc.), the more this matters, I’ve noticed.

$progressPreference = 'silentlyContinue'
invoke-webrequest $logurl -outfile .\logfile.log -UseBasicParsing
$progressPreference = 'Continue'

Be sure to reactivate this setting afterwards, because this affects any commandlet using that progress-bar feature.
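If you want that setting restored even when the download fails halfway, a try/finally wrapper does the trick. A small sketch:

$progressPreference = 'silentlyContinue'
try { invoke-webrequest $logurl -outfile .\logfile.log -UseBasicParsing }
finally { $progressPreference = 'Continue' }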

4. No redirects please.

Invoke-WebRequest automatically follows an HTTP redirect (301/302), so you end up with the page you were looking for in most cases.
If you want to test whether a URL is properly redirected (or not redirected), this just makes things harder. In that case you can turn off redirects by using the MaximumRedirection parameter and setting it to 0.

When you hit a URL that returns a 301 when doing this, the command will throw an error saying the maximum redirection count has been exceeded, which makes this case easy to test for.
The result object will also contain the redirect StatusCode.

$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0
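A sketch of what such a redirect check could look like; the -ErrorAction flag keeps the redirection error from stopping your script, and whether you need it depends on your PowerShell version:

$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0 -ErrorAction SilentlyContinue
if ($r.StatusCode -eq 301) { "Redirects to: $($r.Headers.Location)" }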

5. Use the PowerShell result object

It’s overkill in some cases, but in others this is pure win. The result object contains some really handy bits of the webpage, making a lot of tricky text and regex parsing obsolete.
It’s a piece of cake to parse all images linked from a page using the Images collection. Want to parse all outgoing links on a page? Use the Links collection. There’s also a StatusCode, a Headers collection, and Forms and InputFields collections for form parsing, and more.
Check out what’s available using Get-Member:

Invoke-WebRequest https://n3wjack.net | get-member
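Grabbing every outgoing link URL from a page, for instance, takes a single line:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Links.href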

6. If all else fails, use wget.exe

Yep. Sometimes Invoke-WebRequest simply doesn’t cut it. I’ve seen it hang on some complex pages, trying to parse them and failing miserably.
In that case you can fetch the page using the GNU Wget tool, download it as a text file and then parse that.
You have to call wget with the .exe extension, otherwise you’ll be triggering the PowerShell alias for Invoke-WebRequest again.

# Install WGet with Chocolatey
choco install wget

# Get the page and save it as a text file
wget.exe https://n3wjack.net -O nj.html
# Read the file and parse it.
get-content nj.html | % { <# parsing code goes here #> }