An Arduous Endeavor (Part 1): Background and Yak Shaving


I recently picked up a low-powered handheld gaming device which has gotten me excited about playing retro games again. Part of the fun of such devices (for me, anyway) is tweaking them and getting them set up just right, and doing that has involved exploring the various Discords related to handheld gaming, which led me to discover a new-ish handheld system I hadn’t heard of before: the Arduboy.


Deleting kafka topics from a consumer group

Kafka consumer groups let you keep track of the latest offsets consumed for a given topic/partition. We ran into an issue recently when we started monitoring the lag for a given consumer group using kafka-lag-exporter, though: if your consumer group has ever committed an offset for a topic, that offset sticks around for as long as the consumer group exists, so stale topics keep showing up in the lag metrics.

We tried deleting the offending offsets using the kafka-consumer-groups command line tool, but we got a message saying that the operation wasn’t supported by our broker (we are using MSK).
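
For the curious, the attempt presumably looked something like this (the --delete-offsets option needs broker-side support for offset deletion, which our MSK version apparently lacked):

$ docker run --rm --volume ~/kafka-ssl.properties:/config.properties \
    --entrypoint bin/kafka-consumer-groups.sh solsson/kafka \
    --bootstrap-server your-kafka-server:9092 --command-config /config.properties \
    --group your-consumer-group --delete-offsets --topic old-unused-topic1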

So what to do? Well, I started by poking around the __consumer_offsets topic, and then noped out of there when I saw that it stores its keys and values in a binary format that you need a Java class to parse.

The next idea I had was to delete the consumer group and recreate it, leaving out the offending topic(s). But we were subscribed to a lot of topics! Well, a little bash can go a long way.

#!/usr/bin/env bash
set -o errexit
set -o nounset

kafka_server=your-kafka-server:9092
consumer_group=your-consumer-group
# note: bash can't export arrays, so this must stay a plain assignment
skiptopics=("old-unused-topic1" "old-unused-topic2")

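# containsElement NEEDLE ELEM...: succeed if NEEDLE matches any of the remaining args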
containsElement () {
  local e match="$1"
  shift
  for e; do [[ "$e" == "$match" ]] && return 0; done
  return 1
}

# describe the group, skip the header lines, and keep topic, partition, and current offset
# (--command-config passes the mounted SSL properties through to the tool)
current_offsets=$(docker run --rm --volume ~/kafka-ssl.properties:/config.properties --entrypoint bin/kafka-consumer-groups.sh solsson/kafka --bootstrap-server $kafka_server --command-config /config.properties --group $consumer_group --describe | tail -n +3 | awk '{ print $2, $3, $4 }')

echo "Current offsets:"
echo "$current_offsets"

( set -o xtrace; docker run --rm --volume ~/kafka-ssl.properties:/config.properties --entrypoint bin/kafka-consumer-groups.sh solsson/kafka --bootstrap-server $kafka_server --command-config /config.properties --group $consumer_group --delete )

while IFS= read -r line; do
  arr=($line)
  topic="${arr[0]}"
  if containsElement "$topic" "${skiptopics[@]}"; then
    continue
  fi
  partition="${arr[1]}"
  offset="${arr[2]}"
  # reset just this topic:partition so per-partition offsets are preserved
  ( set -o xtrace; docker run --rm --volume ~/kafka-ssl.properties:/config.properties --entrypoint bin/kafka-consumer-groups.sh solsson/kafka --bootstrap-server $kafka_server --command-config /config.properties --group $consumer_group --reset-offsets --topic "$topic:$partition" --to-offset $offset --execute )
done < <(printf '%s\n' "$current_offsets")
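
Afterwards, you can re-run the describe command from the script to confirm the recreated group only tracks the topics you care about:

$ docker run --rm --volume ~/kafka-ssl.properties:/config.properties \
    --entrypoint bin/kafka-consumer-groups.sh solsson/kafka \
    --bootstrap-server your-kafka-server:9092 --command-config /config.properties \
    --group your-consumer-group --describe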

Running a local Docker registry mirror

We recently moved to a mountainside, and while we are theoretically within the service area of our local cable internet provider, we have been waiting for them to get back to us about installation for over two weeks.

During that time, I’ve been doing my development work using the LTE on my phone (thanks, unlimited data from Visible!). However, LTE isn’t really the fastest thing in the world, especially when you’re trying to pull a bunch of Docker images while testing multiple local kind clusters.

While I was waiting for another cluster to spin up, I decided to look into running a local Docker mirror. Docker publishes a recipe for this, but calling it a recipe feels a little misleading – there’s no code to copy and paste!

So here’s my recipe:

  1. Get a registry container running:

    $ docker run --detach --env "REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io" --publish=5000:5000 --restart=always --volume "$HOME/docker-registry:/var/lib/registry" --name registry registry:2
    
  2. Update your Docker daemon JSON configuration (/etc/docker/daemon.json on Linux, or Preferences → Docker Engine in Docker Desktop) to include the following:

    {
      "registry-mirrors": ["http://localhost:5000"]
    }

  3. Restart the Docker daemon so it picks up the new configuration.
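
To verify that the mirror is actually caching things, pull an image and then ask the local registry what it holds (the _catalog endpoint is part of the standard registry v2 API):

    $ docker pull alpine
    $ curl http://localhost:5000/v2/_catalog

If everything is wired up, the response should include something like {"repositories":["library/alpine"]}.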

Automatically run a task every two weeks without cron

I use fish as my go-to shell – it’s fast, the syntax is more sane than bash, and there are a lot of great plugins for it.

One plugin I use a lot is jethrokuan’s port of z, which allows for very quick directory jumping.

Unfortunately, sometimes I reorganize my directories, and z can get confused and try to jump into a directory that no longer exists.

No worries! z thought of that, and provides the z --clean command, which removes nonexistent directories from its list.

But I never remember to run that. Wouldn’t it be nice if I could just have that run automatically every two weeks or so?
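
Here’s a minimal sketch of the general idea in fish itself, using a timestamp file to remember the last run (the stamp path is illustrative, stat -f %m is the macOS flavor, and this assumes z is already loaded):

# in ~/.config/fish/config.fish
set -l stamp ~/.cache/z_clean_stamp
if not test -e $stamp; or test (math (date +%s) - (stat -f %m $stamp)) -gt 1209600 # 14 days
    z --clean    # prune directories that no longer exist
    touch $stamp # remember when we last cleaned
end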


Bring back “Always open these types of links in the associated app” to Google Chrome

I use Zoom for my work a lot. I pass around Zoom links like they’re popcorn being shared at a movie theater. I’ve got them in my calendar, in Slack, and in emails.

I used to be able to click on a link, and the link would open in my default web browser (Google Chrome), and then that would open up the Zoom application.

In Google Chrome 77, Google changed that. Now, I have one more button to click to confirm that I want to open up the Zoom application. There used to be a checkbox labeled Always open these types of links in the associated app, but that went away.

However, there is a hidden preference (intended for policy administration, but usable by all) that can bring it back! Windows users can add a registry entry, but I’m on a Mac. Here’s how a Mac user can do it:

  1. Quit Google Chrome
  2. Open up Terminal
  3. Run the following code at the terminal prompt:
     defaults write com.google.Chrome ExternalProtocolDialogShowAlwaysOpenCheckbox -bool true
  4. Restart Google Chrome

Now, when you try to open links, that checkbox will be back. Check it, and you’ll have fewer buttons to click in the future!
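
And for the Windows users mentioned above, the registry entry should look something like this (run from an elevated prompt; Chrome reads the same ExternalProtocolDialogShowAlwaysOpenCheckbox policy name from its policies key):

reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v ExternalProtocolDialogShowAlwaysOpenCheckbox /t REG_DWORD /d 1 /f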

asdf, poetry, tox, and Travis CI

I recently revisited a Python module that I developed called singletons. When I set it up, I tried to follow best practices for lots of things, including using tox with Travis CI to automatically run tests upon push. I used a cookiecutter template called cookiecutter-pylibrary, which set a lot of sensible defaults. And then I took a job where I didn’t do much Python at all.

Well, I’m finally getting back into Python (yay!), and decided to revisit this library. It seems the community is converging on poetry for packaging and dependency management rolled into one elegant tool, and having tried it out a bit, I have to say it’s quite nice. I decided to migrate my project to use this instead of setup.py, and while I was at it I decided to get rid of a lot of extraneous files and make the development and deployment process more streamlined.

I did, however, run into some hiccups getting everything set up to work with the way I do development, so I’m documenting my process here (if only to help my future self).

Python version management with asdf

First of all, there’s Python version management. Once upon a time I used pyenv, but I hated having to install a whole bunch of disparate tools for each programming language I used. Now I use asdf, which lets me use a single command to manage basically every programming language. If you haven’t set up asdf already, here’s a quickstart:

# install asdf and common dependencies
$ brew install asdf \
  coreutils automake autoconf openssl \
  libyaml readline libxslt libtool unixodbc \
  unzip curl

# set up asdf with python
$ asdf plugin add python
$ asdf install python 3.8.0

# install additional versions as necessary
$ asdf install python 3.7.5
$ asdf install python 3.6.9

What asdf does is add its shims directory to your path, so that when you run python (or python3 or python3.8), it will use the version installed by asdf. Awesome! But there’s one caveat – it only uses those versions if you tell it to.

Using asdf versions of Python

asdf does give you the option of specifying a global version of a particular interpreter/compiler to use. However, given that OSX includes a system version of python (and some tools may expect that to function normally), I didn’t want to replace it system-wide. So my solution is to do the following.

In each folder where I’m doing python development, I run an asdf local python command. This creates a file called .tool-versions (which you should probably add to a global gitignore file). asdf searches up the directory hierarchy for the nearest .tool-versions file to determine which version of python to use.

For example, if I want to use Python 3.8.0, I would run the following:

$ asdf local python 3.8.0
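
That just writes a one-line file mapping the tool to a version:

$ cat .tool-versions
python 3.8.0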

The special trick for tox

tox requires multiple versions of Python to be installed. With asdf, you have multiple versions installed, but they aren’t normally all exposed to the current shell. Enter asdf’s ability to pin multiple versions at once!

You can use the following command to expose multiple versions of Python in the current directory:

$ asdf local python 3.8.0 3.7.5 3.6.9

This will use 3.8.0 by default (if you just run python), but it will also put python3.7 and python3.6 shims in your path so you can run those too (which is exactly what tox is looking for).

Installing tox and poetry

Lastly, just to be safe, you should ensure that each of those asdf versions of python has the bare minimum of dependencies: namely, tox and poetry.

$ pip3.8 install tox poetry
$ pip3.7 install tox poetry
$ pip3.6 install tox poetry

One other thing – asdf might miss the fact that you’ve installed tox and poetry, so you can run the following to force it to pick up on that:

$ asdf reshim python

Now you should be able to run tox normally!

Travis CI

Last of all, getting Travis to work with all this. It’s actually much simpler than it used to be. With an appropriate tox setup, you can keep your Travis configuration very simple:

.travis.yml

language: python
python:
  - "3.6"
  - "3.7"
  - "3.8"
before_install:
  - pip install poetry
install:
  - pip install tox-travis
script:
  - tox

tox.ini

[tox]
isolated_build = true
envlist = py36,py37,py38
skip_missing_interpreters = true

[testenv]
whitelist_externals = poetry
commands =
  poetry install -v --extras "eventlet gevent"
  poetry run pytest {posargs} tests/

Also, if you have other build stages, like docs, linting, etc., things will become a little more complicated, but hopefully still manageable!

Note that the poetry install command includes some extras. Chances are your library doesn’t have these, but I have some tests that use them. You can probably just do poetry install -v for most situations.

Bonus:

You can update pip for each environment to hide some annoying warnings:

$ pip3.8 install --upgrade pip
$ pip3.7 install --upgrade pip
$ pip3.6 install --upgrade pip

Also, by default, poetry creates virtualenvs in your user directory (~). I prefer to keep my virtualenvs close to the project files, and poetry has an option to support this.

$ poetry config settings.virtualenvs.in-project true
# or if you are running poetry 1.0
$ poetry config virtualenvs.in-project true