
A practical guide to using KeePass password manager

Thinking about using a password manager that is free and secure, but have your doubts about the online ones? Well, lucky you: this is just the post you are looking for.
With all these hacks and breaches going around you shouldn’t be reusing passwords, and you know it. Instead, you can let a password manager generate long, gibberish-like random passwords for all your logins. That way hackers have to throw a thousand cores and millions of years at them before they can crack one. And if they do crack one anyway, it won’t matter much, because it will only work on that one site.
Trusting all your passwords to a piece of software? Is that a good idea? What if I need my passwords on another machine, or on my phone? What if I’m on vacation?
I’ve been storing all my passwords in KeePass for many years now, so I’ll share my setup. You can use this as inspiration to set up your own KeePass flow.

Why KeePass

There are a few cloud-based alternatives out there, but when I started with KeePass those weren’t around yet, or I didn’t know about them.
I thought about switching to one, but eventually didn’t because:

  1. They are not free, or have limited free plans.
  2. They use proprietary software, so you can’t tell how they work or whether they really store your passwords safely. KeePass, however, is open source and has been audited for security in the past.
  3. Storing all your passwords on a server owned by someone else, without a local backup, sounds like a bad idea to me.
  4. Some can’t be used for anything other than websites: desktop app credentials, or even SSH logins and other weird and geeky stuff you need random secrets for.

Yes, they are slightly more convenient and look a bit more polished. But for me that doesn’t outweigh the extra control I get with KeePass.

Installing KeePass

KeePass exists for Windows, Linux, macOS and Android. It’s a typical installation. If you’re as geeky and paranoid as me, you download it from the main site and check the MD5 hash of the installation files. That way you’re sure you didn’t download some altered or hacked version. It hasn’t happened to KeePass before, but it did happen to the Linux Mint ISOs at one point, so you can never be sure.
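On Windows you can check such a hash with PowerShell’s Get-FileHash cmdlet. A quick sketch, with the installer file name as a placeholder for whatever you downloaded:

# MD5 hash of the downloaded installer, to compare with the one listed on the site.
Get-FileHash .\KeePass-Setup.exe -Algorithm MD5
# Get-FileHash computes SHA256 by default, if the site lists that instead.
Get-FileHash .\KeePass-Setup.exe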

There is a getting started guide on the KeePass website that walks you through setting up and creating a first database. This Lifehacker post does the same thing and also has some nice screenshots for guidance.

Securing your password database

When it comes to securing your password database, you have to make sure the master password to unlock it is a pretty damn good one. It has to be as long as possible (at least 10 characters, but more is better), with upper case, lower case, numbers, special characters, the whole shebang. On top of that, you have to be able to remember it too. So I guess this is one of the hardest bits.
There are tricks to make this easy though. Think of a good phrase you can easily remember, or any list of words. Take the first letter or first few letters of each word, mix it up with some special characters, and you end up with something hard to crack and easy to remember.
Or just come up with a good passphrase of random words you can remember. Don’t use Correct Horse Battery Staple or a popular lyric, because those are probably in some password list database already. You can generate a random passphrase using the EFF word lists and some dice, or use one of the many generators online.
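If you’d rather script the dice part, here’s a minimal PowerShell sketch, assuming you’ve saved the EFF large word list (a tab-separated file of dice rolls and words) as eff_large_wordlist.txt. Note that Get-Random is not a cryptographic RNG, so treat this as illustration only; physical dice remain the more paranoid option:

# Each line of the EFF list looks like "11111<tab>abacus"; keep only the words.
$words = Get-Content .\eff_large_wordlist.txt | ForEach-Object { ($_ -split "`t")[1] }
# Pick 5 random words and join them into a passphrase.
(Get-Random -InputObject $words -Count 5) -join " "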

Just be original. Or try anyway.

Small steps

When I started out I didn’t trust KeePass enough to dump and change all my passwords from day one. I started out simple, adding new sites I registered for and using randomly generated passwords from the built-in password generator. Later I added sites I frequently used and changed their passwords to more complex ones. Now everything is in there. Not every password is random though. Really important accounts I also keep in my head, using a unique, complex password that I can still remember. Those accounts have 2-factor authentication activated too, so even if a hacker finds the password, they still won’t get in.
Knowing those key passwords is also a fallback in case I don’t have access to my KeePass DB for some reason.

Syncing the DB

Now you’ll probably want to use this on more machines than just your laptop.
There are a few options:

  1. You put the DB on a thumb drive you always have on you. This is a good backup too. You can use PortableApps or a portable KeePass version on the thumb drive and use it anywhere like that.
  2. You sync the DB to your favorite cloud drive and sync it to every machine you want to use it on.

I use Dropbox myself, which is great for syncing between home, work and my phone. OneDrive would also work, as it does pretty much the same thing.
There are also a number of plugins for KeePass to sync to Google Drive, FTP, and other online providers, so I’m sure you’ll find something you like.

On your phone

If you want access to your passwords on your phone, you’ll need some extra apps. I use Android myself, but similar apps exist for iOS.
You will need two apps: one to open and use the database, and one to sync the file to your phone. (You could copy the file over manually, but I wouldn’t advise it.)

To use the database there are plenty of options when you search for KeePass, but the best one I’ve used so far is Keepass2Android.

For syncing the file to my phone I use Dropsync. This syncs a Dropbox folder to a folder on your phone. You can use the free version if you’re only setting up 2 folders.
You can also use the Dropbox app itself and mark the file to be available offline, but I’ve noticed this doesn’t always work. I often ended up with an old version of the database when I needed it.
Maybe in the future this’ll get better, but until then, Dropsync is what I’m using.

Extensions

KeePass has a ton of plugins allowing you to customize it for all sorts of things. There are plugins to integrate it with your browser, synchronize files over all sorts of protocols and services, export, import, add visual features and more.

I use as few plugins as possible though, as each plugin has access to your database and is a possible vulnerability. Yes. Tin-foil hat here. But LastPass’ Chrome plugin leaked login credentials a while ago, so there you go.

By using the standard keyboard shortcuts on PC you can already get a long way. Be sure to check out the Auto-Type override documentation if you have a website that isn’t playing nice with the defaults. You can find a way to get it to work for 99% of the websites out there. The other 1% just have really shitty UX.
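For reference, the default Auto-Type sequence is {USERNAME}{TAB}{PASSWORD}{ENTER}. For a login form that asks for the username and password in two separate steps, an entry-specific override could look something like this (the delay is just a value you’d tune for the site):

{USERNAME}{ENTER}{DELAY 1000}{PASSWORD}{ENTER}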

download cover art for all your albums, with powershell

Album covers are nice eye candy when you’re using a media player like Foobar2000, which automatically picks up a cover.jpg file in an album folder. The problem is that I have a lot of mp3 albums I ripped from my CDs way back, and those don’t have any fancy cover art files.

I looked around for some tools that could automagically download covers for my albums but didn’t find anything handy. Since my music is structured in sub-folders like (artist)\(album), I thought this should be easy enough to parse and get pictures for.
If only there was a service that could easily provide those…

I tried the MusicBrainz APIs, but those turned out to be hard to use and didn’t give me any covers for some test albums either. Then I thought of Last.fm. They have a lot of cover art, and their URL structure is the same as my folder structure… hmmm.

And here it is: a PowerShell script which runs over your folder structure, tries to get the album page from Last.fm, and then saves a cover.jpg image based on the album page metadata.

A few things to know:

  • Your mp3’s are expected to be in a folder structure like (artist)\(album)\*.mp3
    E.g. The Prodigy\The Fat of the Land
  • If a folder contains any JPG or PNG image, it will be skipped. So that means you can run the script multiple times, and it will only download images once.
  • The “Various artists” folder is skipped by default because it doesn’t fit the search pattern. If you store these types of albums in another folder, you might want to update that line with the correct folder name. If the script does happen to process such a folder because it has a different name, nothing will go wrong; it simply won’t find any album matches.

To use it, copy the code below into a file called get-albumart.ps1, or whatever name you fancy. Then run it as follows to get those pretty album covers:

.\get-albumart.ps1 d:\music

And as always, this script comes as is, without any kind of warranty and you’re free to try it at your own risk. I wrote and used it and it worked great for me. I hope it works for you too. If Last.fm sues you because you’re downloading every image they have on the site because of your huge album collection? You didn’t get this script from me OK. Nope. Not me. ;-)

# Downloads cover art for an (artist)\(album) folder structure from Last.fm.
param ([Parameter(Mandatory=$true)][string]$path)

# Hide the progress bar; it slows down invoke-webrequest considerably.
$progressPreference = 'silentlyContinue'
pushd
cd $path

# Every sub-folder is an artist folder, except "Various artists".
$artistFolders = ls -directory | where { $_.name -ne "Various artists" }

foreach ($artistFolder in $artistFolders)
{
    $artist = $artistFolder.name
    write-host "::: $artist :::" -foregroundcolor green

    cd -literalpath $artistFolder
    $releaseFolders = ls -directory

    foreach ($releaseFolder in $releaseFolders)
    {
        $release = $releaseFolder.name
        write-host "$release" -foregroundcolor cyan
        cd -literalpath "$releaseFolder"

        # Skip folders that already contain an image, so reruns are cheap.
        if ((test-path *.png) -or (test-path *.jpg))
        {
            write-host "- Images found, skipping."
        }
        else
        {
            # Last.fm album pages use the same artist/album structure as the folders.
            $url = "https://www.last.fm/music/$($artist)/$($release)"
            $r = $null

            try
            {
                $r = invoke-webrequest $url -usebasicparsing
            }
            catch
            {
                write-host "- Release not found, skipping: $artist - $release" -foregroundcolor red
            }

            if ($r -ne $null)
            {
                # Grab the og:image metadata line and extract the JPG URL from it.
                $s = $r.content -split "`n" | where { $_ -like "*`"og:image`"*" }
                $img = ($s -split '"') | where { $_ -like "*https*.jpg*" }

                if ($img -ne $null)
                {
                    write-host "- Downloading image for $artist - $release from $url"
                    invoke-webrequest $img -outfile cover.jpg
                }
                else
                {
                    write-host "- No image for $artist - $release from $url" -foregroundcolor yellow
                }
            }
        }
        cd ..
    }
    cd ..
}

popd
$progressPreference = 'Continue'

tweet from the command line

I could’ve used a screenshot of a command line window, but instead you get a nice hummingbird. Because, you know, Twitter.

It’s so simple it’s hardly worth a blog post, but since a simple tweet isn’t so easy to find using a search engine, I’ll just put it up here anyway.
Let’s say you come up with this brilliant joke or insight and you want to tweet it instantly to the world. Now you have to open up a browser, type in that dreadfully long twitter.com URL, wait for the site to load, type in your tweet and hit send.
Man. That’s a lot of work.

But what if you could just enter this from the command line?

tweet OMG I love tweeting from the command line

Wow. That would be awesome. Because you always have a console window open anyway, being an edgy and trendy developer using all those nifty command line tools, right?
You betcha.

So how about that awesome batch script? What does that look like? Well here you go:

@start "" "https://mobile.twitter.com/compose/tweet?text=%*"

That’s all it takes. Save that as tweet.cmd and put it somewhere in your PATH environment variable so Windows can find it and run it.
It’ll launch the Twitter mobile site, and all you’ll have to do is hit “Send”.
So sweet.

cool vim tips and tricks you might not know

Vim has tons of awesome shortcuts and features you pick up over time. Some of those I don’t use every day, so I write them down to look up again when I can’t remember exactly how they work. Instead of keeping them locked away in a text file, I’ll throw them online here and spread the Vim love. None of these need any special plugins; they should all work right out of the box with plain old Vim.

If you want to know more about a specific command listed here, use the Vim :help command to find out more. There are usually more options and possibilities for each of these commands and the Vim documentation is excellent.

Here we go!

When you are on the command line using a console application and you want to edit the output in Vim right away, or open that long list of possible command line switches in Vim for reference, this one will come in handy.
I’m using GVim here because it opens in a separate window from your shell, which makes it the most useful option.

ls *.txt | gvim -
docker -h | gvim -
git --help | gvim -

This one opens a ton of files from PowerShell in a single Vim instance, in different tabs. It assumes you are running it from a PowerShell console, of course.

gvim -p (ls *.ps1)

For more Vim command line options run this in your favorite shell environment:

vim -h
gvim -h

How about opening a remote file, or fetching the HTML from a page over HTTP, straight from Vim:

:e https://n3wjack.net/

When you work with log files a lot, being able to delete lines containing (or not containing) a specific word can be just what you need.
These two commands are perfect for filtering out obsolete exceptions while figuring out what is causing that nasty production issue:

:g/DeleteAll/d
:v/DeleteAllButThis/d

Did you know that Vim has a spell checker? I didn’t for quite a while (try :h spell for more details).
To activate/deactivate:

:set (no)spell

To jump to the next / previous misspelled word:

]s
[s

To get a list of spelling suggestions (or use the mouse in GVim, which is quite practical):

z=

You can add a simple XML-tidy shortcut to your .vimrc file by adding the following command.
It sets the file type to XML, removes spaces between closing and opening brackets, adds a return character between those brackets, and finally re-indents the document so it looks all nice and tidy.

nmap <leader>ppx <Esc>:set filetype=xml<CR>:%s/> *</></g<CR>:%s/></>\r</g<CR><ESC>gg=G<Esc>:noh<CR>
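As an illustration, running that mapping on a one-liner like <root><item>1</item></root> should give you something like this (assuming filetype indentation is enabled; the exact indentation depends on your settings):

<root>
    <item>1</item>
</root>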

You can force syntax highlighting in Vim as follows, for e.g. HTML, XML and Markdown.
Of course this works for a ton of other file types as well, as long as you can figure out the right syntax key (usually the name of the syntax file). That’s pretty easy in most cases.

:set syntax=html
:set syntax=xml
:set syntax=markdown

I add shortcuts for any files I frequently edit by using the leader key and a letter combination that’s easy to remember.
For example this one to edit my custom .vimrc file when I press the leader key followed by “e” and “v” (edit vimrc).

nnoremap <Leader>ev :tabe ~\vimfiles\nj-vimrc.vim<CR>

That’s about it. For more nice Vim tips check out my other Vim posts. Another good resource for bite-sized Vim tips is the MasteringVim account on Twitter and its newsletter.

querying Elasticsearch with Powershell with a little help from Kibana

Kibana is a great dashboarding tool for easily querying an Elasticsearch store on the fly, without having to know exactly how to write an Elasticsearch query. For example when you’re using Logstash to dump all your log files into an Elasticsearch DB and use Kibana to nail down that specific weird exception you’re seeing.
Kibana is great for showing graphs and giving a pretty good overview, but what if you want to take that query data and do some processing on it? You can’t really export it from the dashboard, but for each of those table or graph panels on your dashboard you can click the “Inspect” button and see what Elasticsearch query is used to get the data for the panel.

It looks something like this:

curl -XGET 'http://yourserver:9999/logstash_index_production/_search?pretty' -d '{
"query": { ...
}'

This is a curl statement and it contains all you need to run the same query using PowerShell. The easiest thing to do is to copy the whole JSON statement into a text file and strip out the curl bit and the URL. Keep that URL handy though, because it’s the one you’ll need to target in the Invoke-RestMethod call.
If you refactor it into something like the statements below and save it as a .ps1 file, you can run it from the command line and get the results back as PowerShell objects parsed from the JSON result. Yes. PowerShell is that cool. ;)

$elasticQuery = @"
{
"query": { ... }
}
"@

$elasticUri = 'http://yourserver:9999/logstash_index_production/_search?pretty'
# Recent Elasticsearch versions require an explicit JSON content type.
Invoke-RestMethod -Uri $elasticUri -Method POST -ContentType 'application/json' -Body $elasticQuery

To store the results in a local variable you just run it like this:

$r = .\RunElasticQuery.ps1

Now you’re free to run all sorts of funky processing on the data or perhaps dump it to a CSV file.
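For example, assuming a typical Logstash-style response, the matching documents sit under hits.hits._source on the result object, so dumping a few fields to CSV could look like this (timestamp, level and message are made-up field names for the example):

$r = .\RunElasticQuery.ps1
# Each hit's _source holds the original log document; pick the fields you need.
$r.hits.hits._source | select timestamp, level, message | Export-Csv results.csv -NoTypeInformation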

If you’re good enough at the Elasticsearch DSL you can even skip the Kibana query shortcut and modify the query itself in your PowerShell script.


invoke-webrequest pro tips

The Invoke-WebRequest PowerShell cmdlet is great if you want to get and work with a web page’s output without installing extra tools like wget.exe. If you’re planning to do some text parsing on a web page anyway, PowerShell is an excellent option, so why not go full PS mode?
Unfortunately the cmdlet has some drawbacks, causing it to be a lot slower than it should be if you just want plain text, and its response parsing can even cause it to lock up and not return a result at all.

So here are some pro tips for parsing the output using PowerShell fast and effectively:

1. Use basic parsing

The cmdlet does some DOM parsing by default, using Internet Explorer. This takes time and sometimes fails too, so if you want to skip that bit and make things faster, simply add the UseBasicParsing switch:

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing

2. Split HTML into lines

Parsing text in PS is easy, but it’s even easier if the result is formatted like a text file with multiple lines instead of the full HTML in a single string. If you take the Content property of the response, you can split it into separate lines on the newline character:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Content -split "`n"

Or, if you also want the HTTP header info to be included in the result, use RawContent instead:

(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).RawContent -split "`n"

This can be really handy if you want to automatically check whether the right response headers are set.
But you can also use the Headers collection on the result object, which is even easier, as shown below.
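For example, to check a single response header you can index straight into that collection. A quick sketch (the header name is just an example):

$r = Invoke-WebRequest https://n3wjack.net -UseBasicParsing
# Prints the header value, or nothing if the header isn't set.
$r.Headers["Strict-Transport-Security"]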

3. Disable download progress meter shizzle to download large files (or always to speed things up)

That download progress bar is a nice visual when you’re using Invoke-WebRequest to download some large binaries and want to see their progress, but it significantly slows things down too. Set the $progressPreference variable to 'silentlyContinue' and you’ll see your scripts download those files a lot faster.
The larger the files (like big log files, images, videos, etc.), the more this matters, I’ve noticed.

$progressPreference = 'silentlyContinue'
invoke-webrequest $logurl -outfile .\logfile.log -UseBasicParsing
$progressPreference = 'Continue'

Be sure to reactivate this setting afterwards, because it affects any cmdlet using the progress bar feature.

4. No redirects please.

Invoke-WebRequest automatically follows an HTTP redirect (301/302), so in most cases you end up with the page you were looking for.
If you want to test whether a URL is properly redirected (or not redirected), this just makes things harder. In that case you can turn off redirects by using the MaximumRedirection parameter and setting it to 0.

When you hit a URL that returns a 301 this way, the command will throw an exception saying the maximum redirection count has been exceeded. This makes the case easier to test.
The result object will also contain the redirect StatusCode.

$r = Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0
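If you want to assert on the status code, a sketch like this should do it in Windows PowerShell, where the response hides inside the exception:

try {
    Invoke-WebRequest http://n3wjack.net -MaximumRedirection 0 -ErrorAction Stop
}
catch {
    # For a redirected URL this prints e.g. MovedPermanently (301).
    $_.Exception.Response.StatusCode
}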

5. Use the PowerShell result object

It’s overkill in some cases, but in others this is pure win. The result object contains some really handy bits of the web page, making a lot of tricky text and regex parsing obsolete.
It’s a piece of cake to get all images linked from a page using the Images collection. Want to parse all outgoing links on a page? Use the Links collection. There’s also a StatusCode, a Headers collection, and Forms and InputFields collections for form parsing, and more.
Check out what’s available using Get-Member:

Invoke-WebRequest https://n3wjack.net | get-member
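For example, listing every outgoing link URL on a page becomes a one-liner:

# All href values of the links found on the page.
(Invoke-WebRequest https://n3wjack.net -UseBasicParsing).Links.href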

6. If all else fails, use wget.exe

Yep. Sometimes Invoke-WebRequest simply doesn’t cut it. I’ve seen it hang on some complex pages, trying to parse them and failing miserably.
In that case you can fetch the page using the GNU Wget tool, download it as a text file and then parse that.
You have to call wget with the .exe extension, otherwise you’ll be triggering the PowerShell alias for Invoke-WebRequest again.

# Install WGet with Chocolatey
choco install wget

# Get the page and save it as a text file
wget.exe https://n3wjack.net -O nj.html
# Read the file and parse it.
get-content nj.html | % {
    # parsing code goes here
}