Following on from my last post, I’ve been working to track and control the conditions in my tea storage boxes. Initially, I got some cheap units from Amazon: basically a humidity/temperature sensor, an LCD display, and a battery. But since I don’t just sit in the basement staring at my tea shelf, I wanted to upgrade to something that could transmit the data somewhere easier to check.
I’ve recently been getting into collecting and drinking tea. A good friend introduced me to puer tea, which is noteworthy in that it’s designed to improve with age. That aging process, much like wine’s, relies on proper storage conditions. Puer originates in Yunnan Province, China, and so the expected aging conditions roughly match the climate there: warmer temperatures and higher humidity, especially compared to the basement where my tea shelf sits.
I could just accept that my tea isn’t going to sit in perfect conditions, but what fun would that be? It seems way more enjoyable to over-engineer some complicated solution.
I’m a big fan of DockerHub’s Automated Builds: they let you tie a git repo to a DockerHub image, so that updating the repo kicks off a fresh build of the image. It further lets you define Repository Links, so that if a parent image is updated, child images rebuild themselves.
In my case, I maintain a base image with the OS and package updates, and then a tree of images built on top of it. I want to rebuild that base image every so often to pick up updated packages, but DockerHub has no feature for scheduled rebuilds. So I wrote an AWS Lambda to handle it.
I’ve been using Terraform for managing my AWS account for a while. It’s pretty snazzy, but there are still a couple of things that Terraform doesn’t fully handle. For example, making an IAM access key in Terraform stores the secret key in the statefile. They’ve added support to store the secret key encrypted with a GPG key, but I’d much prefer to not have it end up in the statefile at all.
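For reference, the GPG support looks roughly like this (the Keybase username is a placeholder); the secret only ever lands in the statefile encrypted, which is better, but it’s still there:

```hcl
resource "aws_iam_user" "deploy" {
  name = "deploy"
}

resource "aws_iam_access_key" "deploy" {
  user = aws_iam_user.deploy.name

  # The secret is encrypted to this key before being written to state.
  pgp_key = "keybase:example_user"
}

output "encrypted_secret" {
  value = aws_iam_access_key.deploy.encrypted_secret
}
```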
A while back, I wanted to do a couple of quick things with the Slack API. The script I was writing would only ever run a handful of times, all from my local computer, and I hate having multiple distinct credentials stored in the same place with the same permissions, so I hatched a plan: piggyback on the existing credentials my browser was already using to access Slack.
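The mechanics are simple once the token is in hand: pull it out of the browser’s devtools (the extraction details vary by client and aren’t shown here), then pass it along with each Web API call. A sketch, with a placeholder token:

```python
import json
import urllib.request


def slack_request(method: str, token: str) -> urllib.request.Request:
    """Build a Slack Web API call authenticated with an existing token."""
    return urllib.request.Request(
        f"https://slack.com/api/{method}",
        headers={"Authorization": f"Bearer {token}"},
    )


def whoami(token: str) -> dict:
    """Ask Slack who this token belongs to (performs a network call)."""
    with urllib.request.urlopen(slack_request("auth.test", token)) as resp:
        return json.load(resp)


# Usage, with a token copied out of the browser (placeholder value):
#   whoami("xoxs-copied-from-browser")
```

No new credential gets minted, and when the browser session dies, so does the script’s access.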
At some point while working with VM images and containers, I ended up wanting some custom Arch packages. It started with a desire for lighter packages and to really understand what was going into my system, and then turned into something of an obsession. As of today, I publish 92 Arch Linux packages, most of them custom builds of common Linux tools. And because otherwise I’d be drowning in manual work, I’ve automated the hell out of the process.
I spend a decent amount of time thinking about init systems. Most of the time, that means s6, but for more complex or user-interactive systems, I’d go for systemd. This puts me squarely at odds with a decent-sized group of loud people, it seems. One of the complaints occasionally brought up is that sysvinit was great, init scripts are great, and so on. My hypothesis: for people whose primary init-system interaction is writing and using initscripts, systemd unit files are so much easier to read, write, and use that systemd is without a doubt the better choice.
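To make that concrete, here is a complete (if minimal, and hypothetical) unit file for a daemon; the equivalent initscript would be dozens of lines of shell handling daemonization, PID files, and status checks by hand:

```ini
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```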
Now that tax day has officially passed, it occurred to me that the best way to celebrate would be to plan for next year. A few of my friends semi-seriously keep tabs on a lightweight challenge: try to break as close to even as possible on tax day. I don’t know how much actual effort most of them put into this challenge, but I’d wager it’s low, since I’ve historically put none into it myself. But the goal is fairly sound: don’t give Uncle Sam an interest-free loan of your hard-earned cash.
It’s also dawned on me that, complex though tax code may be, I have a computer and a marginal understanding of math. As such, I’ve set out to code my way to victory.
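The math really is approachable: income tax is just marginal brackets, where each rate applies only to the slice of income that falls inside its bracket. A sketch with made-up brackets (not real tax law):

```python
# Illustrative brackets, NOT real tax law: (upper bound, marginal rate).
BRACKETS = [
    (10_000, 0.10),
    (40_000, 0.12),
    (85_000, 0.22),
    (float("inf"), 0.24),
]


def tax_owed(taxable_income: float) -> float:
    """Apply each marginal rate to the slice of income in its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return owed
```

Compare that number against projected withholding and the gap tells you how far off from breaking even you’ll land.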
It seems like every day a new project is released for managing datacenters full of containers, all networked together and serving content to users. I enjoy that aspect of containers as much as the next sysadmin, but I’ve found one of the coolest use cases for them to be repeatable/isolated software builds.
Over time I’ve collected a decent list of codebases I want to use, and in the past I would pull and build them on the systems I planned to run them on. I’ve already talked at length about how poorly that scales, but now I’d like to focus on one specific part of the solution: using Docker containers to build and share compiled software packages.
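The pattern boils down to pinning the toolchain in an image so every build starts from an identical environment. A minimal sketch, assuming a make-based project (base image and paths are illustrative):

```dockerfile
# Pin the toolchain so every build starts from the same environment.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential ca-certificates git && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /src
COPY . .

# The build happens inside the container; only /out needs to leave it.
RUN make && make install DESTDIR=/out
```

Anyone with the image gets the same compiler, the same libraries, and therefore the same artifacts, with nothing leaking in from the host.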
To say I’m addicted to GitHub is an understatement. But I’ve attempted to focus my addiction towards productive goals, and so I decided that I wanted to process GitHub streak data programmatically. To my dismay, streak data isn’t exposed as part of their API, and my request that they add it was met with polite neutrality. So I set out to see how their site built the streak chart on the user page.
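Scraping the per-day contribution counts off the page is the fiddly part (the details depend on how GitHub happens to render the calendar). Once you have date → count pairs, the streak math itself is easy; a sketch:

```python
from datetime import date, timedelta


def longest_streak(counts: dict[date, int]) -> int:
    """Length of the longest run of consecutive days with >= 1 contribution."""
    best = run = 0
    prev = None
    for day in sorted(d for d, n in counts.items() if n > 0):
        # Extend the run only if this day directly follows the previous one.
        run = run + 1 if prev is not None and day - prev == timedelta(days=1) else 1
        best = max(best, run)
        prev = day
    return best
```

The current streak falls out the same way, just walking backwards from today instead.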