
2017 by the numbers
Tyler Cipriani Posted

These are some meaningless numbers.

I felt pretty powerless against the rising tide of horribleness that seemingly permeated every aspect of 2017 – that is not captured by these numbers.

SSH Key Fingerprints, Identicons, and ASCII Art
Tyler Cipriani Posted

Security at the expense of usability comes at the expense of security.


Public key authentication is confusing, even for “professionals”. Part of the confusion is that base64-encoded public keys and private keys are just huge globs of meaningless letters and numbers. Even the hashed fingerprints of these keys are just slightly smaller meaningless globs of letters and numbers.

It is a known fact in psychology that people are slow and unreliable at processing or memorizing meaningless strings

Hash Visualization: a New Technique to improve Real-World Security

Ensuring that two keys are the same means comparing key hashes, i.e., fingerprints. When using the md5 hash algorithm, comparing a key fingerprint means comparing 16 8-bit numbers (and, for the uninitiated, that means blankly staring at 32 meaningless letters and numbers). In practice, that usually means not comparing 32 meaningless letters and numbers except when strictly necessary: security at the expense of usability comes at the expense of security.
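
Those 32 letters and numbers are just the hex form of a 16-byte md5 digest of the base64-decoded key blob, written as colon-separated pairs. A minimal sketch of what ssh-keygen -l -E md5 is doing under the hood (the blob below is a made-up stand-in, not a real key):

```python
import base64
import hashlib


def md5_fingerprint(b64_blob):
    """md5 of the decoded key blob, as 16 colon-separated hex pairs."""
    raw = base64.b64decode(b64_blob)
    digest = hashlib.md5(raw).hexdigest()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))


# A stand-in blob: just some bytes, base64-encoded like a real key would be
blob = base64.b64encode(b'not a real ssh key').decode()
print(md5_fingerprint(blob))
```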

SSH Tricks

I am constantly troubleshooting ssh. I spend a lot of time looking at the authlog and comparing keys.

I’ve learned some fun tricks that I use constantly:

Get the fingerprint of a public key, ssh-keygen(1):
ssh-keygen -l [-E md5] -f [public key]
Generate a public key given a private key, ssh-keygen(1):
ssh-keygen -y -f [private key]
Automatically add a server key to the known_hosts file, ssh-keyscan(1):
ssh-keyscan -H [hostname] >> ~/.ssh/known_hosts
List key fingerprints in ssh-agent, ssh-add(1):
ssh-add [-E md5] -l

When I get the message, Permission denied (publickey), I have a protocol.

  1. Find the fingerprint of the key being used by the authenticating host. This will either be in ssh-agent or I may have to use ssh-keygen -l -E md5 -f [publickey] on the authenticating host.
  2. Find the authorized_keys file on the target machine: grep 'AuthorizedKeysFile' /etc/ssh/sshd_config
  3. Check that the fingerprint of the public key from the authenticating host is among the fingerprints of the keys listed in the authorized_keys file.

Most ssh problems are caused by (SURPRISE!) the public key of the authenticating host not being present in the AuthorizedKeysFile on the target.

The Worm Bishop

Most of the time when I “compare fingerprints” of keys, I copy, I paste, and finally I use the time-honored global regular expression print command. This is insecure behavior for myriad reasons. The secure way to compare keys is by manually comparing fingerprints, but meaningless string comparison is hard, which makes security hard, and, so, most folks simply aren’t secure. Security at the expense of usability comes at the expense of security.

In the release announcement for OpenSSH 5.1, a different approach to comparing fingerprints was introduced:

Visual fingerprinnt [sic] display is controlled by a new ssh_config(5) option “VisualHostKey”. The intent is to render SSH host keys in a visual form that is amenable to easy recall and rejection of changed host keys.

Announce: OpenSSH 5.1 released

The “VisualHostKey” setting is the source of the randomart ASCII that you see if you add the -v flag to the ssh-keygen command from above:

$ ssh-keygen -lv -E md5 -f ~/.ssh/
2048 MD5:b2:c7:2a:77:84:3a:62:97:56:d0:95:49:63:fd:5d:2b (RSA)
+---[RSA 2048]----+
|       .++       |
|       .+..     .|
|     . .   . . ..|
|    . .     .E.. |
|     ...S     .  |
|      o+.        |
|     +..o        |
|  o B .o.        |
| . + +..         |
+-----------------+

This is something like an identicon for your ssh key:

Identicons’ intended applications are in helping users recognize or distinguish textual information units in context of many.

– Don Park

Although it is important to note that while the intent of an identicon is to distinguish against many, the intent of the VisualHostKey is more ambiguous.

The work to add this randomart was based on the paper Hash Visualization: a New Technique to improve Real-World Security, and the algorithm that generates these bits of art is discussed in detail in the paper The drunken bishop: An analysis of the OpenSSH fingerprint visualization algorithm. Also, interestingly, while the above paper contains a reference to the apocryphal drunken bishop leaving stacks of coins in each square he’s visited, the code comments in OpenSSH refer to a “worm crawling over a discrete plane leaving a trace […] everywhere it goes”.

Regardless of whether the algorithm’s protagonist is a worm or a bishop, the story is similar. There is a discrete plane (or an atrium), and the protagonist starts in the middle (possibly drunk). They move around the room leaving a trail, either because they are slimy or because they are dropping coins, and the more times they visit a particular square of the plane/atrium, the slimier or more coin-filled it becomes. The direction of each move is determined by reading each byte of the md5 checksum two bits at a time, least-significant bits first, so the same checksum always produces the same randomart.
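
The walk itself is only a few lines. Here is a minimal Python sketch of the idea, not OpenSSH’s actual C code: the field size and symbol ramp follow the OpenSSH defaults, but the start/end marking is simplified:

```python
import hashlib

WIDTH, HEIGHT = 17, 9
SYMBOLS = ' .o+=*BOX@%&#/^'  # more visits => a later symbol in the ramp


def drunken_bishop(digest):
    """Render a digest as randomart by walking a WIDTH x HEIGHT field."""
    field = [[0] * WIDTH for _ in range(HEIGHT)]
    x, y = WIDTH // 2, HEIGHT // 2  # start in the middle of the room
    for byte in digest:
        for _ in range(4):  # four 2-bit moves per byte, low bits first
            x += 1 if byte & 0x1 else -1  # bit 0: move right or left
            y += 1 if byte & 0x2 else -1  # bit 1: move down or up
            x = max(0, min(x, WIDTH - 1))  # the walls stop the walk
            y = max(0, min(y, HEIGHT - 1))
            field[y][x] += 1  # drop a coin (or some slime)
            byte >>= 2
    art = [[SYMBOLS[min(c, len(SYMBOLS) - 1)] for c in row] for row in field]
    art[HEIGHT // 2][WIDTH // 2] = 'S'  # where the walk started
    art[y][x] = 'E'                     # where the walk ended
    border = '+' + '-' * WIDTH + '+'
    return '\n'.join([border] + ['|' + ''.join(r) + '|' for r in art] + [border])


print(drunken_bishop(hashlib.md5(b'an example input').digest()))
```

Because the walk is driven only by the digest, running it twice on the same input always prints the same picture.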

I wrote a simple python version that visualizes the algorithm step-by-step which may be a better explainer than any meaningless strings I can group together:

The Drunken Slime Bishop

ASCII Art is meaningless characters

Confession time: I have never used randomart, even when copying and pasting is impossible; I just compare strings. I have VisualHostKey yes in my ~/.ssh/config, but I almost never look at it since OpenSSH warns me if a host key has changed, so mostly it’s just taking up vertical space.

But why?

I think the reason I don’t use VisualHostKey to help distinguish between public keys is that it fails to meet the regularity property of hash visualization:

Humans are good at identifying geometric objects (such as circles, rectangles, triangles, and lines), and shapes in general. We call images, which contain mostly recognizable shapes, regular images. If an image is not regular, i.e. does not contain identifiable objects or patterns, or is too chaotic (such as white noise), it is difficult for humans to compare or recall it.

Hash Visualization: a New Technique to improve Real-World Security

Randomized ASCII art that is composed of common letters and frequently used symbols is very nearly the same as a string. The constituent parts are the same. This is the first problem: while Unicode contains symbols that are more recognizable, ASCII contains a very limited set of characters. This is a property that makes ASCII as an art-medium so charming, but makes abstract ASCII-art hard to remember as it fails to form geometric patterns that are easily distinguished from noise.

Secondly, ASCII randomart lacks any color, which is a property mentioned for hash visualizations as well as one of the more distinguishing features of identicons:

humans are very good at identifying geometrical shapes, patterns, and colors, and they can compare two images efficiently

Hash Visualization: a New Technique to improve Real-World Security

Can I do better?

No. Probably not if I had to work under the constraint that these hash visualizations need to work everywhere OpenSSH works. OpenSSH is an amazing piece of software and they are solving Hard™ problems while folks like me write blogs about ASCII art.

Donate to them now…I’ll wait.

For the purposes of differentiation, under the constraint that it must work on all terminals, I like the current solution. My liking the solution didn’t, of course, stop me from fiddling with the existing solution.

Add color

The first stab I took at this was to add color. As an initial try, I simply took the first 6 hex digits of the md5 checksum, converted those to RGB, and converted that to true-color ANSI output:

def to_rgb(color):
    return int(color[:2], 16), int(color[2:4], 16), int(color[4:], 16)

def to_ansi_rgb(color):
    r, g, b = to_rgb(color)
    return '\x1b[38;2;{};{};{}m'.format(r, g, b)
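
For example, fed the first six hex digits of the fingerprint above (b2:c7:2a), the helpers produce a color escape that tints everything until a reset. The two functions are repeated here so the snippet stands alone:

```python
def to_rgb(color):
    # same helper as above, repeated so this snippet runs on its own
    return int(color[:2], 16), int(color[2:4], 16), int(color[4:], 16)


def to_ansi_rgb(color):
    r, g, b = to_rgb(color)
    return '\x1b[38;2;{};{};{}m'.format(r, g, b)


prefix = 'b2c72a'  # first six hex digits of the md5 fingerprint
print(to_rgb(prefix))  # (178, 199, 42)
print(to_ansi_rgb(prefix) + '+---[RSA 2048]----+' + '\x1b[0m')  # \x1b[0m resets
```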

This actually makes a huge difference in my ability to quickly differentiate between one key and another:

|       .++       |
|       .+..     .|
|     . .   . . ..|
|    . .     .E.. |
|     ...S     .  |
|      o+.        |
|     +..o        |
|  o B .o.        |
| . + +..         |


|      o .        |
|     = +         |
|    . = o        |
|       = +       |
|      . S  .     |
|       o..o .    |
|       o. o=     |
|      . ..=..    |
|    E.   o.+.    |

Purple-ish vs. green is easy for me to distinguish, but probably less easy for a sizable portion of the population. Further, while this solution works in terminals that support true-color output, I didn’t even take the time to make it fail gracefully to 8-bit color; and, of course, with 8-bit color there are fewer means to differentiate via color. While color is visually distinctive to me, it is likely infeasible to implement, or unhelpful, for a large portion of the population. Fair enough.
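
For what it’s worth, degrading to 8-bit color is mostly arithmetic: map each RGB channel onto the 6x6x6 color cube that occupies ANSI codes 16 through 231. A rough sketch (this ignores the grayscale ramp at codes 232-255, which a careful implementation would also consider):

```python
def to_ansi_256(color):
    """A cell of the 6x6x6 ANSI color cube for an 'rrggbb' hex string."""
    r, g, b = (int(color[i:i + 2], 16) for i in (0, 2, 4))
    # scale each 0-255 channel down to 0-5 and index into the cube
    index = 16 + 36 * (r * 6 // 256) + 6 * (g * 6 // 256) + (b * 6 // 256)
    return '\x1b[38;5;{}m'.format(index)


print(to_ansi_256('b2c72a'))
```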

Different Charsets

Unicode encodes some interesting characters to represent slime nowadays. There is the Block Elements set, the Box Drawing set, and even a Miscellaneous Symbols set. I’m on Linux, so naturally some of these sets just look like empty boxes to me. But in my slime-bishop repo I did add support for a few symbols.

Oddly, I don’t think the symbols do much to help differentiation.

|       ░▓▓       |
|       ░▓░░     ░|
|     ░ ░   ░ ░ ░░|
|    ░ ░     ░E░░ |
|     ░░░S     ░  |
|      ▒▓░        |
|     ▓░░▒        |
|  ▒ ▆ ░▒░        |
| ░ ▓ ▓░░         |

I think the problem of memory persists (sensible-chuckle.gif). We’re good at remembering regular pictures, but do abstract pictures count as regular?

Final thoughts

Visual hash algorithms are a novel concept, and they may become more useful over time. At the very least, the ability to see at a glance that two keys do not match is a step in the right direction. The drunken bishop is certainly a fun algorithm to try to improve (while paying absolutely no attention to reasonable constraints like “terminal background color” or “POSIX compatibility”). The drunken slimy worm bishop, with their coins and/or their slime trails, is certainly useful for differentiation (just like identicons), but not for identification. I hope that the distinction between differentiation and identification is clear to users of OpenSSH, but I’m not entirely sure that it is.

Improving the usability of security tooling is as important as it’s ever been; however, the corollary of “security at the expense of usability comes at the expense of security” is this: usability that creates a false sense of security also comes at the expense of security. We must remain mindful that we are not providing usability that is meaningless or (worse) usability that widens the attack surface without providing any real security benefit. Part of that comes from a shared understanding of the available features in the tools we already collectively use. In this way we can build on the work of one another, each bringing a unique perspective, and ultimately, maybe (just maybe), create tools that are usable and secure.

Topographical Sorting in Golang
Tyler Cipriani Posted

I own a fair number of computer books that I have never read from cover-to-cover (and a slim few that I have). I tend to dip in-and-out of programming books—absorbing a chapter here and a chapter there. One of the books I pick up with some frequency is Algorithms Unlocked by Thomas H. Cormen, who is one of the authors of the often cited CLRS which is a hugely comprehensive textbook covering the topic of algorithms.

Algorithms Unlocked, in contrast to its massive textbook counterpart, is a slim and snappy little book filled with all kinds of neat algorithms. It doesn’t focus on any specific language implementations, but rather describes algorithms in pseudo-code and plain English. After an algorithm is introduced, there is a discussion of the Big-O and Big-Θ run-times.

One of the things I like to do is read about a particular algorithm and test my understanding by implementing the pseudo code in some programming language. Since I recently ran into a graph problem while working on blubber —which is a Go project—I figured I’d implement the first algorithm in the Directed Acyclic Graph (DAG) chapter in Go.

Also, since I haven’t written anything on my blog in a while, I figured I’d write up my adventure!

Directed Graphs Represented in Go

The first problem when attempting to create a topographic sort of a graph in any programming language is figuring out how to represent a graph. I chose a map with an int as a key (which seems pretty much like a slice, but the use of a map makes this implementation type agnostic). Each vertex n is represented by a key in the map, and each vertex m that is adjacent to n is stored in a slice referenced by the key n.

package main

import "fmt"

func main() {
    // Directed Acyclic Graph
    vertices := map[int][]int{
        1:  []int{4},
        2:  []int{3},
        3:  []int{4, 5},
        4:  []int{6},
        5:  []int{6},
        6:  []int{7, 11},
        7:  []int{8},
        8:  []int{14},
        9:  []int{10},
        10: []int{11},
        11: []int{12},
        13: []int{13}, // NB: a self-loop, so 13 never reaches inDegree 0
        14: []int{},
    }

    fmt.Println(topographicalSort(vertices))
}
Topographical Sort

I implemented the algorithm in a function named topographicalSort. The inline comments are the pseudo-code from the book. Also noteworthy: I stuck with the unfortunate variable names from the book (although somewhat adapted to camelCase to stick, a bit, to Go conventions):

// topographicalSort Input: g: a directed acyclic graph with vertices numbered 1..n
// Output: a linear order of the vertices such that u appears before v
// in the linear order if (u,v) is an edge in the graph.
func topographicalSort(g map[int][]int) []int {
    linearOrder := []int{}

    // 1. Let inDegree[1..n] be a new array, and create an empty linear array of
    //    vertices
    inDegree := map[int]int{}

    // 2. Set all values in inDegree to 0
    for n := range g {
        inDegree[n] = 0
    }

    // 3. For each vertex u
    for _, adjacent := range g {
        // A. For each vertex *v* adjacent to *u*:
        for _, v := range adjacent {
            //  i. increment inDegree[v]
            inDegree[v]++
        }
    }

    // 4. Make a list next consisting of all vertices u such that
    //    in-degree[u] = 0
    next := []int{}
    for u, v := range inDegree {
        if v != 0 {
            continue
        }
        next = append(next, u)
    }

    // 5. While next is not empty...
    for len(next) > 0 {
        // A. delete a vertex from next and call it vertex u
        u := next[0]
        next = next[1:]

        // B. Add u to the end of the linear order
        linearOrder = append(linearOrder, u)

        // C. For each vertex v adjacent to u
        for _, v := range g[u] {
            // i. Decrement inDegree[v]
            inDegree[v]--

            // ii. if inDegree[v] = 0, then insert v into next list
            if inDegree[v] == 0 {
                next = append(next, v)
            }
        }
    }

    // 6. Return the linear order
    return linearOrder
}

In our vertices DAG, the only vertices with an inDegree of 0 are 1, 2, and 9, so in a topographic sort one of those numbers would be first. Running this code seems to support that assertion:

$ go build -o topo_sort
$ ./topo_sort
[9 1 2 10 3 4 5 6 7 11 8 12 14]

In fact, all the vertices with an inDegree of 0 ended up right at the beginning of this slice.
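
One way to sanity-check the output directly is to verify the defining property: for every edge (u, v), u must appear before v in the order. A quick sketch, in Python for brevity (vertex 13 is guarded around since its self-loop keeps it out of the output):

```python
def is_linear_order(order, graph):
    """True if u appears before v in order for every edge (u, v) in graph."""
    position = {u: i for i, u in enumerate(order)}
    return all(
        position[u] < position[v]
        for u, adjacent in graph.items()
        for v in adjacent
        if u in position and v in position  # skip vertices never emitted
    )


vertices = {
    1: [4], 2: [3], 3: [4, 5], 4: [6], 5: [6], 6: [7, 11],
    7: [8], 8: [14], 9: [10], 10: [11], 11: [12], 13: [13], 14: [],
}
print(is_linear_order([9, 1, 2, 10, 3, 4, 5, 6, 7, 11, 8, 12, 14], vertices))
```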

Can you dig it?

DAGs are ubiquitous and have many uses both inside and outside of computers. I keep running into them again and again: I stare this dad-joke cold in the face, once again, this evening.

Algorithms Unlocked talks in approachable language about using a DAG to graph and understand things like the order of operations for cooking a meal or for putting on hockey goalie equipment—I find the plain-spoken explanations charming and helpful. I dig this book, and this is far from the first exercise I’ve hacked through out of it. I’m sure I’ll be picking up this book again sometime in the near future–who knows?–I might even finish it!

Offline spellcheck
Tyler Cipriani Posted

In which I futilely attempt to use aspell to stop using Google as spellcheck.

I am embarrassingly atrocious at spelling. In Vim, which I use for email via Mutt, I can use :set spell. In Emacs I can use flyspell-mode. Browsers all have spellcheck now, seemingly. Still… sometimes I find myself just Googling™ (or DDGing™) individual words as I flail through the darkness that is spelling in the English language. This is stupid and ridiculous for myriad reasons that I don’t really want to talk about.

As is my wont, through force of will, by might of awk, and by glory of xsel: I have written a function in my dotfiles that solves this problem for me. It might even be generally useful, bask in its awesomeness:

spell function in action
spell() {
    local candidates oldifs word array_pos index

    # Parse the aspell format and return a list of ":"-separated words
    oldifs="$IFS"
    IFS=':' read -ra candidates <<< "$(printf "%s\n" "$1" \
        | aspell -a \
        | awk -F':' '/^&/ {
            split($2, a, ",")
            for (x in a) {
                gsub(/^[ \t]/, "", a[x])
                result = a[x] ":" result
            }
            gsub(/:$/, "", result)
            print result
        }')"
    IFS="$oldifs"

    # Reverse, number, and print the parsed bash array because the list comes
    # out of gawk backwards
    for item in "${candidates[@]}"; do
        printf '%s\n' "$item"
    done \
        | tac \
        | nl \
        | less -FirSX

    printf "[ $(tput setaf 2)?$(tput sgr0) ]\t%s" \
        'Enter the choice (empty to cancel, 0 for input): '
    read -r index

    [[ -z "$index" ]] && return
    [[ "$index" == 0 ]] && word="$1"

    [[ -z "$word" ]] && {
        array_pos=$(( ${#candidates[@]} - index ))
        word="${candidates[$array_pos]}"
    }

    [[ -n "$word" ]] && {
        printf '%s' "$word" | xsel -p
        printf "Copied '%s' to clipboard!\n" "$word"
    } || printf "[ $(tput setaf 1):($(tput sgr0) ] %s\n" 'No match found'
}

Maybe someone can use this ¯\_(ツ)_/¯

The Rsync Algorithm in Python
Tyler Cipriani Posted

I’ve often pondered this great and terrible beast – rsync. Its spiny, nearly impenetrable command-line interface. Its majestic and wonderful efficiency. The depths of its man page, and the heights of its use-cases.

Leaving aside the confusing implications of trailing slashes, rsync is amazing. The Wikimedia deployment tooling – scap (which at this point has been iterated on for over a decade) – still makes heavy use of rsync. At $DAYJOB - 3, rsync is used to manage a library of hundreds of thousands of flac, mp3, and ogg files. It’s hard to argue with rsync. The amount of network traffic generated via rsync is really hard to beat with any other program.

But what’s it doing?

rsync is fast. rsync is ubiquitous. rsync uses few client resources, and little network IO. OK…Why?

I started reading about the rsync algorithm when a fellow I work alongside began espousing the relative superiority of zsync for the case of our deployment server. Currently scap has a built-in (and quite fancy) fan-out system so as not to put too high of a load on only 1 server; however, zsync flips the rsync algorithm on its head, running the rsync algorithm on the client rather than the server. What exactly is rsync doing that makes the load on the server so high?

The Meat

For the purposes of explanation, let’s say you ran the command: rsync α β.

The rsync algorithm boils down to 5 steps:

  1. Split file β into chunks of length n.
  2. Calculate a weak (adler32) and strong (md4) checksum for each chunk of file β.
  3. Send those checksums to the rsync server (where file α is).
  4. Find all the chunks of length n in α that are in β by comparing checksums.
  5. Create a list of instructions to recreate α from β.
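
Step 2’s weak checksum is what makes the server-side search tolerable: adler32 can be “rolled”, so sliding the search window one byte costs O(1) instead of a full re-scan. A sketch of that property, tracking adler32’s two running sums directly rather than calling zlib (the helper names here are mine):

```python
MOD = 65521  # largest prime below 2**16


def adler_sums(window):
    """The two running sums that make up an adler32 checksum."""
    a, b = 1, 0
    for byte in window:
        a = (a + byte) % MOD
        b = (b + a) % MOD
    return a, b


def roll(a, b, size, out_byte, in_byte):
    """Slide the window right one byte in constant time."""
    a = (a - out_byte + in_byte) % MOD  # drop the old byte, add the new one
    b = (b - size * out_byte + a - 1) % MOD
    return a, b


data = b'the rsync algorithm'
size = 4
a, b = adler_sums(data[0:size])               # checksum of b'the '
a, b = roll(a, b, size, data[0], data[size])  # window now covers b'he r'
print((a, b) == adler_sums(data[1:size + 1]))
```

This is exactly the trick that lets the sender try a match at every byte offset without recomputing a checksum from scratch each time.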


Do it then

I actually think it would have been easier for me to understand a bad python implementation of the rsync algorithm than to read a tech report on rsync. So with that in mind, here’s a bad python implementation of the rsync algorithm.


First it might be helpful to define my block size, and create a couple of helper functions to create the rolling checksums.

import collections
import hashlib
import zlib

BLOCK_SIZE = 4096  # chunk length n in bytes; any reasonable value works


# Helper functions
# ----------------
def md5_chunk(chunk):
    """Returns md5 checksum for chunk"""
    m = hashlib.md5()
    m.update(chunk)
    return m.hexdigest()


def adler32_chunk(chunk):
    """Returns adler32 checksum for chunk"""
    return zlib.adler32(chunk)

I’ll also need a function that creates a rolling checksum of a file. The checksums_file function will read in BLOCK_SIZE bytes through to the end of the file, calculate both the adler32 checksum and the md5 checksum for those chunks, and then put those chunks in a data structure.

I’d like a nice interface beyond primitives for both the signatures and the list of checksums – I’ll create 2 objects Signature and Chunks to make that interface. Chunks is basically a list of Signatures with a few other methods for fanciness.

# Checksum objects
# ----------------
Signature = collections.namedtuple('Signature', 'md5 adler32')


class Chunks(object):
    """Data structure that holds rolling checksums for file B"""
    def __init__(self):
        self.chunks = []
        self.chunk_sigs = {}

    def append(self, sig):
        self.chunks.append(sig)
        self.chunk_sigs.setdefault(sig.adler32, {})
        self.chunk_sigs[sig.adler32][sig.md5] = len(self.chunks) - 1

    def get_chunk(self, chunk):
        adler32 = self.chunk_sigs.get(adler32_chunk(chunk))

        if adler32:
            return adler32.get(md5_chunk(chunk))

        return None

    def __getitem__(self, idx):
        return self.chunks[idx]

    def __len__(self):
        return len(self.chunks)

# Build Chunks from a file
# ------------------------
def checksums_file(fn):
    """Returns object with checksums of file"""
    chunks = Chunks()
    with open(fn, 'rb') as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            chunks.append(Signature(
                md5=md5_chunk(chunk), adler32=adler32_chunk(chunk)))

    return chunks

Now I need a couple of methods to complete the algorithm – one that will find the BLOCK_SIZE chunks in file β that are in file α, and one that will produce instructions that can be used to assemble the new and improved β from the β we’ve already got.

The _get_block_list function will return a list of chunk indices and bytes. The chunk indices are indices of chunks already present in file β (we know from the checksums_file function), the bytes are raw bytes that are in α but may not be in β. If a chunk is found in α that is not in β then the first byte of that chunk is appended to the output list and a checksum is calculated for the next BLOCK_SIZE chunk.

This is why network IO for rsync is so efficient – the only raw data that is sent is the information missing from the remote. This is also why rsync causes higher load on the server than the client – it’s not just checksumming files, it’s checksumming, comparing, and building a diff. And it’s doing that process for every machine to which it is attempting to sync.

def _get_block_list(file_one, file_two):
    """The good stuff.

    1. create rolling checksums for file_two
    2. for each chunk in file_one, determine if chunk is already in file_two
        a. If so:
            i. return the index of that chunk
            ii. move the read head by the size of a chunk
        b. If not:
            i. return the next byte
            ii. move the read head by 1 byte
    3. start over at 2 until you're out of file to read
    """
    checksums = checksums_file(file_two)
    blocks = []
    offset = 0
    with open(file_one, 'rb') as f:
        while True:
            f.seek(offset)
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break

            chunk_number = checksums.get_chunk(chunk)

            if chunk_number is not None:
                blocks.append(chunk_number)
                offset += BLOCK_SIZE
            else:
                blocks.append(chunk[:1])
                offset += 1

    return blocks

The poorly named file function (but it’s in the module, so rsync.file is good…right? No? OK.) takes the list of chunk indices and raw bytes from _get_block_list, finds the chunks in β referenced by the index, combines those chunks with the raw bytes from α and returns a string that is the same as file α – it just took a weird route to get there :)

def file(file_one, file_two):
    """Essentially this returns file one, but in a fancy way :)

    The output from _get_block_list is a list of either chunk indexes or data
    as raw bytes.

    If it's a chunk index, then read that chunk from the file and append it to
    output. If it's not a chunk index, then it's actual data and should just be
    appended to output directly.
    """
    output = b''
    with open(file_two, 'rb') as ft:
        for block in _get_block_list(file_one, file_two):
            if isinstance(block, int):
                ft.seek(block * BLOCK_SIZE)
                output += ft.read(BLOCK_SIZE)
            else:
                output += block

    return output

Creating a python file that imports this script as a module and invokes file is all you need to actually run it. I wrote a bunch of tests to help me write the script. The core of the test file was simply:

import rsync

if __name__ == '__main__':
    rsync.file('fixtures/foo.txt', 'fixtures/bar.txt')

And at the end of our python-ing, we came to see the rsync beast in a new light – not a beast at all, just a misunderstood algorithm with a lot of command line flags. The End.