Lane Sawyer🌹

Just trying to leave the world a little better than I found it.

Microsoft Can't Unzip tar Files: My Azure Experience

Recently I was working on getting a basic Node.js REST API running on Microsoft Azure's App Service platform. I've only used AWS professionally, but I wanted to get a sense of what it would take to run a simple website off of Azure so I decided to give it a whirl.

The experience left a sour taste in my mouth and helped me understand why AWS is currently winning the Cloud Wars.

Let's start with the most jarring difference: documentation.

Azure documentation is sorely lacking in discoverability and usefulness. The documentation site is vast and likely contains a lot of good information, but it's organized in a way that forces developers to jump between entirely different sections of the site. And while it gets impressively detailed in some places, in others it inexplicably omits explanations of configuration settings.

AWS documentation has its own issues, but I can generally trust that a basic walk-through will include everything I need to get started, and that each setting will be explained in the reference links each tutorial points to.

Second issue: Azure's GUI-centric approach.

My favorite part about AWS is that (almost) everything can be scripted with configuration files. Azure has the same feature, but I could not find a single tutorial that gave me enough information to write my own configuration file for the service I was using. Instead, I followed a tutorial that walked me through building what I wanted in the Azure console, then exported the configuration file for future reproducibility.

Not ideal.

With AWS, the config-first tutorials teach you what each part does and why it's needed, so you can do the whole thing without ever opening the AWS console.

On top of that, the exported config came with a magic token variable connecting GitHub to Azure, but I couldn't find a single reference explaining how to change it later, or how to create it in the first place without using the Azure console to configure my service.

Third issue: Azure can't unzip tar files!

After following a tutorial for building the Node.js REST API (which didn't even work when initially deployed), I noticed the upload of the build artifact to the server was taking over 10 minutes because the build output had not been archived or compressed. So, since the build server was a Linux machine, I archived the output with tar rather than zip, hoping to iterate more quickly on the still-broken API.

After fixing what turned out to be some issues with environment variables, the API still didn't work. I pored over the code and couldn't find anything wrong. Everything worked on my machine, and the templates matched what was in the tutorial, except for the archive step I had added to fix the slow upload speeds (which should've been my first clue). There wasn't anything obvious left to change!

It took a two-hour call with an Azure agent before they finally realized that the Azure service I was using could not open up tar files, so the API wasn't being fully deployed. I needed to create a zip archive instead. So I changed it, and the API started working!

I understand that zip is the archive tool baked into Windows, so naturally Microsoft will prefer using that to other tools. But it's not hard to write code that checks the archive file type and chooses the right tool when expanding the archive, so I was flabbergasted to hear that the archive format is what caused the problem.
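To show how small that check really is, here's a rough Node.js sketch of detecting an archive's format from its magic bytes. This is a hypothetical helper for illustration, not Azure's actual deployment code:

```javascript
// Hypothetical sketch: how a deployment service could pick the right
// extraction tool by sniffing the archive's magic bytes.
function detectArchiveType(buffer) {
    // zip files start with "PK" (0x50 0x4B)
    if (buffer.length >= 4 && buffer[0] === 0x50 && buffer[1] === 0x4b) {
        return 'zip';
    }
    // gzip (e.g. a .tar.gz) starts with 0x1F 0x8B
    if (buffer.length >= 2 && buffer[0] === 0x1f && buffer[1] === 0x8b) {
        return 'gzip';
    }
    // plain tar has the magic string "ustar" at byte offset 257
    if (buffer.length >= 262 && buffer.slice(257, 262).toString('ascii') === 'ustar') {
        return 'tar';
    }
    return 'unknown';
}
```

A handful of lines like this, plus a call to the matching extractor, is all it would have taken for the deployment to handle both formats.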

If you know me at all, you'll know that I hate Amazon and tolerate Microsoft, so it's weird for me to be endorsing AWS over Azure. But my experience with Azure has really soured my desire to ever try using it again. I know Microsoft really cares about the developer experience in their .NET tooling ecosystem so it's strange that the same care has not been applied to Azure yet. Maybe they'll get to it someday, but AWS is still the easiest to use at this point.

Anyway, that's more words than I ever thought I'd write about file compression tools so we'll leave it at that. I'm just glad it's working now and figured my tech savvy readers would get a chuckle at how one small assumption can cost you hours of your life.

It's Time to Upgrade Your JavaScript Developer Tools

JavaScript is everywhere. As the most popular language in the world right now, it's almost unavoidable. Especially if you're building things for the web. I personally hate JavaScript and tolerate TypeScript, but it's currently the best option for building websites, so I use it every day at work.

While WASM is promising and will eventually bring all major programming languages to the web, we're stuck with JavaScript when writing code for web browsers. Thankfully, though, all of the developer tooling we use can be written in languages other than JavaScript. Over the last few years this has slowly been realized as new developer tools have been written, released, and adopted by large portions of the JavaScript community. esbuild, Parcel, Rome, and more are taking market share from tools written in JavaScript because of the sheer speed gains you get from fast, modern languages like Go and Rust.

The speed gains are substantial. For example, at work we replaced the following tools with newer ones written in Rust and saw the following performance gains:

  • Webpack (55 second average) to Parcel (23 second average) = 32 seconds saved

  • ts-jest (30 second average) to @swc/jest (15 second average) = 15 seconds saved

While we didn't ultimately choose Rome, since it's a very new library that we want to see mature a bit further before committing to it, code formatting tools written in Rust could improve our build times even further:

  • Prettier (14 second average) to Rome (8 second average) = 6 seconds saved

That's a total of 53 seconds that could be saved between the build, running tests, and verifying code formatting consistency (47 of which we've already realized). Not bad! This saves developers time waiting for a local development server to spin up (which usually happens multiple times a day, especially if you're working in multiple code bases), plus you get those time savings on every one of your CI builds.

If you adopt these speed gains across your entire organization, that's a lot of time and money saved if you're paying for something like GitHub Actions, where usage is charged by the minute.

Plus, there's the added benefit of (theoretically) saving energy, which in turn reduces your application's overall CO2 footprint. Building and using efficient systems is one of the biggest things the software engineering profession can do to play our part in addressing the impending climate change catastrophe.

So why not make the switch?

Obviously there are a few concerns:

  • Familiarity with tools. It can be difficult to learn new systems. Thankfully, many of these projects are written with existing workflows in mind (@swc/jest was literally up and running with a single line of code being changed), but adopting any new tool comes with a new burden of learning its intricacies.

  • Time needed to make the switch. Even though some tools are easy to swap in, not every one will be. I think I spent around a full day of work, spread over three months, hacking together the Webpack-to-Parcel migration. It wasn't particularly difficult to do, but if you can't get the time or budget for upgrading tools approved, it's hard to make any changes.

  • Old tech is proven tech. These new tools are untested. What's to say there won't be a major breaking bug that appears later? Do you really want to commit your team to an unproven technology? Again, the ease of changing tools mitigates much of this risk, but it's a risk nonetheless.

If you can get over those concerns, there are likely some newer tools that have come out in the last few years that might be worth a look. They were definitely worth it for us.

If you have any success stories related to upgrading tools in order to save time, feel free to share by reaching out to me on social media.

Remote Work is a Life Changer

Now that I've been working remotely for more than two years, I figured it's worth sitting down to hammer out my thoughts and reflect on what I do and don't like about a fully remote job. I'm going to try to be careful to separate remote work from the realities of pandemic life, but since the pandemic is ongoing, it might be difficult to tease out the differences.

Recently my job has started allowing the technical folks back into the office (the scientists have been on-premise the entire pandemic), but I've only been in a handful of times, mostly to meet up with specific co-workers or attend a company event. Even with the option of going in and sitting at my own desk, I haven't really felt the need at this point, although I'm sure I'll be going in more often as COVID becomes less of an issue.

But despite having a desk and an office, I'm still "fully" remote and I intend to keep it that way. Let's dive into the pros and cons!


Night Owls Rejoice

I'm a night owl in a world built for early birds. Remote work has been a game changer for me. In the before times, I was forced to commute to an office and be there by the unreasonably early time of 8 AM each day for no discernible reason other than "my boss said so". At its worst, I drove 45 minutes each way to a client's work site on the complete opposite side of the metro area for over a year. Waking up at 7 AM to sit in traffic just to be in a cubicle working by myself is the perfect example of how stupidly we had structured software development work up until the pandemic.

Now, I can wake up fifteen minutes before a 9 AM meeting, make some coffee, review my notes, and positively contribute to the meeting objectives. All while actually getting some decent sleep that aligns with my body's needs! I can't imagine what the years of waking up at 7 AM and fighting my circadian rhythm have done to my lifespan. In fact, let's blame that for my hair loss!

My Desk, My Way

After getting a new job and realizing the pandemic was going to last a lot longer than expected, I upgraded from an IKEA kitchen table pulling double duty as a desk to a full-on standing desk setup, complete with multiple monitors, a microphone mount, ergonomic keyboard, comfortable chair, whiteboard, and various office supplies to help me organize my notes.

While I've had some decent desks at some of the more generous client sites I worked at, I've never had a work setup as nice as this one. And now, even if I change jobs in the future, I'll get to keep using the setup I've carefully curated. The first company I worked for was unwilling to spend the money needed for employees to have a comfortable working environment, so now that I know what I'm missing, I regret the six years I spent putting up with cheap office equipment.

Productive Breaks

Back in the dark ages when micro-managers forced you to come to an office so they could look over your shoulder to make sure you were wearing the right clothing and doing your job exactly the way they would, breaks were practically useless. The vast majority of them were spent hanging out in the break room talking with fellow co-workers about nothing in particular. Sometimes I'd take a walk, but most of the time I was in a business office park with very few places to walk. Sometimes there weren't even sidewalks! (Fuck car culture, but that's another post.) If you had personal errands to run, you were out of luck unless they were something you could do from a smartphone.

Basically, breaks were times you sat around dreading going back to work.

Now that I work from home, my breaks are so much more enjoyable! Since software engineering is largely a creative discipline, inspiration comes and goes with the flow of the day. Many times when I'm stuck on a problem, my dog will ask for a walk. That's a great time to take a break, let my brain process the problem, and often I'll have an idea while out enjoying some fresh air! In addition, I can do minor chores throughout the day, use my lunch break to explore and engage with my neighborhood, or have access to my personal computer to play a video game or spend some time working on a volunteer or open source project.


Every Day Is The Same

Granted, every day also felt the same in the office. But at least I saw a complete cast of co-workers, including lots of people I wouldn't have the opportunity to see outside work. That created some fun variety in my day, since I got to swap stories with people completely different from me! In addition, I got to interact with people on the bus or on the highway. I sort of miss the thrill of an idiot cutting me off in traffic or causing a dangerous situation that might kill me in a crash! That's exciting, albeit not particularly welcome.

Now that I'm working remotely, I tend to see the same people. That is, if I see anyone at all. I have a couple of hobbies, like rock climbing, that get me out and about with my friends, but even that starts to feel the same since it's typically the same people participating.

Without a deliberate effort to go try new things, especially activities that involve strangers, each day starts to feel the same as the day before. It wasn't much better when commuting to the office, but it was slightly better.

Getting Stuck Inside

Because I don't have a commute that takes me to a physically different location, sometimes I look up to see an entire day has passed and I haven't left the apartment. This was particularly true when we weren't allowed to leave our apartments due to the pandemic, but even now it sometimes happens by accident when I'm not deliberate about getting outside and doing something that isn't work. Most of the time I'll do a lunchtime and end of day walk with my dog, so this isn't always an issue, but spending so much time in one place isn't very fun.

I could fix this by mixing up where I work. The roof of my apartment complex, one of the dozens of coffee shops nearby, or even my desk at the office are all great options. I just need to be more mindful and deliberate about doing so! Overall, not a terrible "con" to have.

Work is Always Around

When you live in a 500 square foot apartment, there isn't much room for a separate workspace. My wonderful, amazing, perfect workspace I've built does pull double-duty as the desk for my personal computer as well. I'm able to tuck away my work laptop and notes once I'm done for the day, but it's always sitting there and thus is always on my mind to some degree.

Ideally I'd have a separate room for work that would be easy to avoid after I'm done working for the day. It's the main reason I'm considering upgrading to a two-bedroom apartment next time I move, but have you seen real estate and rent prices these days? Even as a well-paid software engineer, I'm not a fan of spending much more on rent than I am right now. That's just the cost of living in a city, I suppose.

I'm sure there's something more I could do to hide my work materials at the end of the day, but no inspiration has struck yet. Something to consider when I next upgrade my desk setup.


Now that I've deftly used the rule of three for each section, we can call this post complete. As you can see, the pros vastly outweigh the cons. I've really enjoyed remote work, and I will never again do five days a week in the office, unless they pay me a ridiculous salary.

Is there something I missed? What other delights or issues have you run into working remotely? Feel free to reach out to me on my blog's guestbook or on any of the social media accounts to share your own experience.

Why I've Yet to Publish a Blog Post on Veganism

Surprisingly, I haven’t written anything about one of the most meaningful decisions I've ever made. About six years ago I became a vegan!

This isn't a secret to anyone who knows me, but I also don't really bring it up unless it's absolutely relevant (like when making sure I'll have food to eat at various events I attend). I would like to bring it up more often, since it's an ethical belief that I hold dearly and I want others to consider making the same choice, but talking about veganism can be a bit touchy.

Part of this is due to the stigma/discrimination vegans face when bringing up the topic. On the Internet you run into a lot of "found the vegan" comments with an eye-roll emoji anytime animal rights come up and a vegan stakes out an ethical claim. Some men find veganism challenging to their view of masculinity. Other folks just don't like to think about where their food comes from and any reminder causes them to lash out at vegans because of the cognitive dissonance they feel when being reminded of the suffering their food choices cause. And a good chunk of people are just afraid of change.

But the biggest reason I haven’t written about veganism is that it can make others feel uncomfortable. Just like religion, veganism carries an inherent “I’m right, you’re wrong” ethical aspect. I’m vegan primarily for ethical reasons, meaning that, in my view, animals should not be exploited in any way, chiefly because they can't consent to sacrificing their bodies for our use. Because it's an ethical stance, my decision to forgo all forms of animal products as much as possible insinuates that anyone who isn’t vegan is not living ethically, and that message is heard by the other person whether or not I explicitly say it.

Food is extremely important to culture and is a way to bond with others. Rejecting a meal because of an ethical choice implies that the person was unethical for preparing it, and they can take that personally (especially when they don't have a good understanding of what veganism is). I can't tell you how many times I've had to turn down home-baked goods because they weren't vegan. It's not fun, because social norms dictate that you should enjoy the food that others share with you. It marks me as an "other" and someone who has to have their needs specifically catered to in order to participate fully in food-related activities.

Because veganism is an ethical framework, it carries the same social pitfalls as discussing religion or politics. My messy exit from Mormonism taught me that I need to stay quiet about sensitive issues if I hope to keep my friends and family around. When I first became an atheist, I shouted it from the rooftops. Through intensive study and thought, I had discovered that Mormonism (like all supernatural worldviews) doesn't appear to be based in an objective reality. And all that missionary training from growing up in a Mormon household and going to Ecuador to try to convert people to Mormonism had taught me that I should be loud and proud about sharing my innermost truth with the world.

So I did.

But when I challenged the worldviews of my Mormon friends and family, I was unfriended on social media, excluded from social events, threatened with expulsion from college if anyone in the BYU administration found out, and even shunned by some family members and in-laws.

I didn’t want to make the same social mistake with veganism. While I had found a wonderful new lifestyle that dramatically decreased the cruelty I inflicted on the world and wanted everyone to know, food — like religion — is a deeply personal subject. People don’t just ditch decades of dietary habits just because a vegan showed them a video of male baby chicks being ground up hours after birth because they can't lay eggs.

Overall, I do bring up my veganism fairly regularly, but mostly in the context of ordering food in a group. If I didn’t bring it up, I often would have literally nothing to eat. So pretty much everybody knows I’m vegan, but I try not to be obnoxious about it precisely because of the social stigma it can cause.

That's the eternal conundrum of vegans. We don't mean to be pushy, but many foods aren't vegan by default. If we don't ask for an accommodation, we'll go hungry. While I typically carry an emergency stash of nuts for those situations, sometimes that's just not possible.

All that said, I still feel the need to publish something on the topic. I've been sitting on this blog post for over two years now, usually only updating it after an argument I had online with someone who wanted me to shut up about my veganism. I'm doing that right now, in fact. I expressed disappointment that my favorite writer didn't have any faux leather options for the special edition versions of his books. And sure, asking for a book that "isn't wrapped in the skin of a corpse" is accurate but hardly tactful, but even if I had been more polite about it the downvote police would've come anyway. In my experience, it doesn't matter how I phrase things. Unless I'm in a vegan-friendly space on the internet, any comment tangentially related to veganism is rejected by the larger community. Which means I often don't say anything at all.

But I don't like feeling silenced, so that often leads me to be blunt and do things like describe a leather-bound book as using the skin of a corpse. Is there personal development to be made there? No duh. But fuck, why do I have to be the one to be the bigger person when the default worldview is that it's okay to slaughter hundreds of thousands of adolescent animals each day just to eat?

Time to take a breath, Lane.

Veganism is important to me, and I do wish more of the world would go vegan. But I don't expect it. The world is already full of so much pain and suffering, so I understand why some folks don't care to think about the animals when we still live under a global system that produces unacceptable levels of human suffering.

I don’t expect anyone who reads this to immediately switch to a plant-based lifestyle. I sure didn't. I lived in Dallas, TX when I tried to go plant-based. A place where most folks don't even know what the word vegan means. It took me a good year or more to fully transition, partially because of the hostile anti-vegan, pro-meat culture of Texans, but mostly because it required rewiring some of the most ingrained habits I had in my life.

Really, this post is just getting my frustrations onto paper. It's not fun being the butt of jokes. It's not fun going hungry because there are animal products in every dish at a party. It's not fun being stereotyped as an annoying, loud-mouthed idealist (even though it's very much true in my case). It's not socially fun being a vegan. There's a reason why a good number of my friends are vegan and vegetarian. We have to stick together because nobody else wants us around. I attribute that to the cognitive dissonance people feel about their treatment of animals, and having a vegan or vegetarian around reminds them of that. But there's also the possibility that maybe we are just a bunch of annoying fucks. Regardless, I've found my people and they are wonderful. They make my life so much better, so does it really matter if the wider world is annoyed by our existence?

But today is the day I actually hit "post" on this thing. I'm done leaving this as a draft, as imperfect as it may be. I know this sounds whiny and privileged, because it very much is. I'm a cis, straight-passing bisexual man living an upper-middle-class lifestyle. What right do I have to complain about some dumb folks on the internet or the occasional meal that I choose not to eat?

I know I haven't made a great argument for why someone might want to go vegan. In fact, this post is likely to scare folks away. Who would want to get yelled at on the internet, excluded from food-centric work and social events, or make family dinner more difficult?

But I do invite you to look into it more. There are so many great resources online that cover the what, why, and how of veganism. If you're interested in delving deeper, even if it's just to learn more about veganism so you can be sympathetic to me or another vegan friend you have, I recommend checking out the following resources to learn more or find vegan recipes:


I'm so stoked! Obviously, FIFA is a garbage organization run by criminals, but also... THE WORLD CUP IS COMING TO SEATTLE.

So yeah, mixed feelings, but I'm excited to show the world how amazing our emerald city really is.

When Are We Going to Do Something?

I said I wasn't going to write about this again, but I will continue chaining together my periodic posts about gun violence in the United States every time something particularly egregious happens.

We just had the racist shooting in Buffalo. Now we've got the senseless Uvalde, Texas shooting.

At this point I've given up hope that we'll do anything regarding gun control. There is so much we could do without even coming close to running afoul of the 2nd Amendment, but we don't, because our legislative branch has been broken for decades.

But I've already written too many words. I said I wasn't going to write about this again. Please reference my past work. The points I make there still stand.

And fucking call and email your Senators. Senseless death due to lackadaisical regulation of firearms shouldn't be a partisan issue.


We won the CONCACAF Champions League title tonight! It was the best soccer match I've ever attended. We set the CONCACAF Champions League attendance record, with 68k+ people screaming our hearts out as we scored each of our three goals to win the championship!

Next up for the Sounders is the 2023 FIFA Club World Cup against some of the best clubs across the world.

And now that our CCL run is over we can get back to focusing on MLS play.

What. A. Game.

What. A. Team.

What. A. City.

I love Seattle.

Hacking Legacy Sites for Fun and (Non)profit


This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are provided below to provide context for those without a background in developing software.


  • GDPR (General Data Protection Regulation): A European Union law focusing on data protection and privacy. California has a similar one called the CCPA (California Consumer Privacy Act). There is no federal law in the USA providing data privacy protection.
  • Cookie banner: Those annoying cookie notifications you get on every new site you visit asking you to choose how closely you want the website to track your behavior.
  • Google Analytics: Google's analytics platform for tracking user behavior. Used by a mind-boggling number of sites.
  • API (Application Programming Interface): Enables applications to exchange data with each other using a documented interface. A major revolution in computer science that enabled the software industry to grow so quickly.
  • JSON (JavaScript Object Notation): A standardized format for representing JavaScript data as human-readable text.
  • regex (Regular Expression): An esoteric way of searching through text using patterns. For example, this regular expression was written by Satan himself to match email addresses: (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

Recently at work I had to fix a few legacy websites with broken cookie banners after we did a major GDPR compliance effort across all the publicly accessible websites. These sites were initially created 14 years ago and haven't been updated for many years. It's a technological wonder that they're still up and running, but they're still there!

Unfortunately, their old age makes delivering updates difficult. And thanks to some technology choices that broke the modern cookie banner code, there were some updates that needed delivering.

Thankfully, those sites already had Google Analytics. Besides being able to track your every move on a website, Google Analytics has the handy feature of remotely delivering code snippets! That's actually how the cookie banner software is delivered to these old sites in the first place. So instead of trying to figure out how to resurrect extremely old deployment infrastructure, I decided to first try to hack together a solution to fix the broken cookie banner software and patch the website via Google Analytics.

That effort turned into the hackiest code I've ever written. It's ugly, nonsensical without the context of the problem at hand, and uses browser APIs I hardly knew existed.

But it works!

And that was the key point. We have no plans to actively return to those legacy sites and provide new updates. All that mattered was that we were compliant with GDPR. Were we actively maintaining those sites, or had major rework for them on the horizon, I wouldn't have turned to my hacky solution. I showed what I wrote to a couple of good friends and they were rightly horrified at what I had done.

But again, it works!

So let's take a look at the code.

First up, I added a forEach method to the JavaScript String prototype.


Yeah. It's that bad.

The good news is that, since forEach on a String makes no sense, the site doesn't already call it anywhere, so there are no conflicts!

But when we look at the actual implementation, it gets worse.

Theoretically, in a sane world, forEach on a String might be a method that loops through each character in a string and lets you do something with it. That would make a bit of sense and can already be done in JavaScript, just not using forEach.
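For the curious, that character-by-character iteration is already built into the language, no prototype surgery required:

```javascript
// Strings are iterable, so you can loop over characters directly...
let upper = '';
for (const ch of 'abc') {
    upper += ch.toUpperCase();
}
// upper is now 'ABC'

// ...or spread into an array first if you really want a forEach:
const codes = [];
[...'abc'].forEach(ch => codes.push(ch.charCodeAt(0)));
```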

But that's not what I did. I discovered that the cookie banner broke because we had a String instead of a JSON object. But Strings can be turned into JSON!

"So", I thought, "what if I turn the String into the JSON object the code expected, then do the forEach stuff that was supposed to happen anyway on my newly created object!"

Turns out, that actually worked 🤣

String.prototype.forEach = function(originalForEachFunction) {
    // Parse the string into the object the calling code actually expected...
    var stringToJSON = JSON.parse(this);
    // ...then run the forEach that was supposed to happen all along.
    stringToJSON.forEach(originalForEachFunction);
};

However, the journey wasn't over. While that fixed the error I was seeing and got the cookie banner to appear, I noticed there was an error when accepting any cookies! Apparently, the cookie banner would make a server call to record what preferences were selected.

I dug into the code and discovered that the network call was failing because the same String I turned into a JSON object earlier was still a string later when it should be an object! That's because the code above didn't actually modify the string at all.

At this point I thought I hit an impasse. There was no obvious way for me to insert myself into the code like I did earlier with my vomit-inducing String.forEach hack.

I let my brain stew on it for a while. That evening, I listened to a new episode of Darknet Diaries, a phenomenal podcast that tells stories about the dark side of the internet, mainly focusing on hackers and computer security. It's one of my favorite podcasts, and it reminded me that I should think like a hacker regarding my cookie banner problem.

And what would a hacker do?

Intercept every single network call, look for the data they're interested in, and modify it as needed!

While I typically don't work during the evenings, this problem and its possible solution were burning a hole in my head, and I had to try it out immediately.

So there I sat, the faint glow of the computer lighting up my face in the dark room, digging up browser API documentation on how to peek at every network call being made. That night of hacking led me to create this monstrosity, which involves XMLHttpRequest, regex replacements, and lots of null checks (I modified the code to simplify what's going on and to provide some minor obfuscation, so imagine something even worse):

var originalSend = XMLHttpRequest.prototype.send;

XMLHttpRequest.prototype.send = function(data) {
    if (data && data.brokenField) {
        // Un-escape the accidentally stringified JSON...
        data.brokenField = data.brokenField.replace(/\\\"/g, '"').replace(/\"\[/g, '[').replace(/\]\"/g, ']');
        // ...and parse it back into the object the server expects.
        data.brokenField = JSON.parse(data.brokenField);
    }
    return originalSend.call(this, data);
};

It's horrible and I hate that my brain came up with it, but it works!

The best part about it is that I never would've been able to come up with such a bonkers idea earlier in my career. I'm at a point where I feel extremely comfortable with web development technologies, meaning I now understand what is available to me and how I can bend the rules. That kind of mastery feels incredibly good once you're there and the feeling of getting something working in a non-traditional manner is the heart of the hacker spirit. Makes me think I would've had a solid career as a white hat hacker in another life!

Anyway, I hope you hated that code as much as I did. The hack has been humming away in production for a few weeks now and works flawlessly.

And before you ask, yes, I heavily documented what is going on with the hack in several places so that people won't be confused when they find my monster a few years down the road.

Until the next hack,


What Should We Expect From FOSS?


This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are provided below as context for those without a background in developing software.


  • Free and Open Source Software (FOSS): Software with published source code that anyone is free to use, study, or modify
  • JavaScript: The world's most popular programming language
  • Node Package Manager (NPM): An online collection of JavaScript code and associated set of tools that software developers use to share their work with others
  • Package: A bundle of code that can have different versions, allowing for software to be updated over time without forcing code using it to immediately upgrade
  • Protestware: A portmanteau of protest and malware, with malware being a portmanteau of malicious software
  • Software License: A document associated with a software project explaining how other developers can use, modify, or share the code

Yesterday, a new vulnerability was reported in the National Institute of Standards and Technology's National Vulnerability Database regarding some "protestware" that was added to a popular JavaScript NPM package that gets about 1 million downloads a week.

The owner of the node-ipc package updated the code to add a 1 in 4 chance of deleting the contents of all the files on your computer and replacing them with the ❤️ emoji if you had an IP address that came from Russia or Belarus. This affected versions 10.1.1 to 10.1.3, meaning a patch version inappropriately delivered this breaking change.

Later, the owner of the package removed this behavior and correctly published a different form of protest as a new major version (11.0.0) that uses the peacenotwar "protestware" package (which was also written by him). Using node-ipc will now put an anti-war message in a text file on the user's desktop, instead of modifying existing files on the user's system. This happens for all users, not just those with IP addresses from Russia or Belarus.

While the more malicious ❤️ emoji update was not available for very long, it still affected many projects and people, including popular ones like the Vue CLI, a developer tool to facilitate building websites. One person even claimed to be part of an NGO that lost thousands of files they were collecting to document Russian war crimes.

This whole thing has caused a bit of an uproar in the online developer community. People are flooding the node-ipc and peacenotwar repositories with issues calling the developer a Nazi or expressing disappointment because the protestware will damage the reputation and trust of open source software. And even more people are watching the deluge of comments with interest, since this is not the first time a developer has updated a popular NPM package to send a message to the broader software development community.

As a software engineer myself, I fall into that last group of interested spectators. All this has been fascinating to watch and has led me to closely examine my beliefs about what it means to use and develop Free and Open Source Software (FOSS) and how I can prevent something like this most recent NPM issue from affecting my team.

So with that context, let's dive into the actual article: What should we expect from FOSS?

Software Licensing

First, let's start by looking at how software licensing works in the open source community, and whether this particular protestware broke the terms of its license.

The license for node-ipc is the popular and flexible MIT license, which offers the software "as is", to be used however the user wants. peacenotwar is licensed under the stricter GPL-3.0 license, which requires any modifications to be published under the same license and the source code be made available.

While I'm not a lawyer, my understanding is that both licenses absolve the developer of any liability for issues that arise from using the software. This is common in licenses often chosen by open source software, so it's not surprising to see them in this case. But many of the people upset about node-ipc seem to not understand that downloading software from a random person on the internet comes with no guarantees, especially given the MIT and GPL-3.0 licenses attached.

From my perspective and experience, node-ipc and peacenotwar are following the terms of their license, even while providing undesired functionality in an updated version of the node-ipc package.

What can this tell us about open source software?

To put it harshly: you get what you paid for and this software was free.

Open source is about making sure the source code is easily accessible. It has nothing to do with quality. For every amazing piece of open source software, there are hundreds of awful ones. I should know, since I've written some of the useless ones! Just look at the GitHub profile of a random developer and you'll stumble across a pile of code that is technically open source, but is not (and never will be) worth using.

The lesson here is: understand that open source software licenses promise you nothing, other than that their source code will be publicly available for examination.


So if there isn't an open source license that protects the user from malicious code updates, what could prevent open source software from delivering malware?

Versioning. Theoretically.

In an ideal world, every update to software would be closely vetted by a team of experts who verified it behaved correctly before being published for the world to use. In that perfect parallel universe, even if a malicious update got past the expert team nobody would download that update before checking it themselves and it would never be set to update to an unchecked version automatically.

Alas, we do not live in such a paradise.

NPM uses Semantic Versioning, which is a widely used standard for labeling new versions of software. But it's just a convention, so there is nothing preventing a developer from breaking the rules when creating new versions. That's what happened with node-ipc, since it introduced the file-destroying protestware as a "patch" update. Patches are meant for non-breaking changes, like bug fixes or small updates that change nothing for the end user.

Clearly, wiping files on the computer is a breaking change, so the owner of node-ipc broke the versioning "contract".

Software development relies on an incredible amount of trust. When you use someone else's software, they often have used some other person's software to create it. This leads to a long chain of dependencies, meaning your website to share pictures of cute animals was ultimately created by the work of hundreds or thousands of people. That trust and sharing of quality software is a major part of why there's been incredible growth in the tools available to software engineers and the resulting applications being produced.

But it does have its downside, which was clearly on display with the node-ipc update.

That trust is exploited by the default behavior of NPM when adding new software dependencies. NPM uses the caret (^) "compatible with version" range by default, which automatically picks up new patch and minor versions when running a very common NPM command (npm install). While this can be helpful for quickly distributing updates like bug fixes or performance improvements, it should not be the default precisely because people can abuse Semantic Versioning.
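To make the caret behavior concrete, here's a toy sketch of the "compatible with" rule for versions at or above 1.0.0. This is a simplification for illustration, not NPM's actual semver implementation:

```javascript
// Toy illustration of the caret (^) rule for versions >= 1.0.0:
// ^X.Y.Z matches any version with the same major number that is >= X.Y.Z.
// (A simplification of NPM's real semver logic.)
function satisfiesCaret(range, version) {
  const [rMaj, rMin, rPat] = range.split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  if (vMaj !== rMaj) return false;
  if (vMin !== rMin) return vMin > rMin;
  return vPat >= rPat;
}

console.log(satisfiesCaret('10.1.0', '10.1.3')); // true  — a patch release is picked up automatically
console.log(satisfiesCaret('10.1.0', '11.0.0')); // false — a major bump requires an explicit upgrade
```

Under that rule, a project depending on ^10.1.0 would have automatically pulled in the malicious 10.1.1 through 10.1.3 patches, but never version 11.0.0.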

Because of the default behavior of a widely used tool, any developers that did not take the extra time to lock their package versions could have woken up a few days ago to a hard drive full of ❤️ emojis.

Engineers should take the time to understand the tools they are using and how software versioning behaviors could impact their code, but the reality is that most don't. Take me for example. I didn't completely understand how versioning worked in NPM earlier in my career even though I had been using it for years and I'm the kind of person who enjoys reading software documentation for fun! Many software engineers face tight deadlines. Unfortunately, things like dependency analysis and reviews don't happen for a good portion of newly written software.

Looking back at node-ipc's versioning, there is now a version 11.0.1, which is a new major version that prominently states that the tool now contains the peacenotwar package, which is far less malicious than the original protestware. This is versioning done properly. While the new version still delivers unwanted functionality, at least node-ipc is now following versioning standards when making noteworthy changes.

The lesson here is: lock your dependencies and review any software upgrades closely. Open source software does not guarantee that there will be working software or proper versioning. The whole point is to be open and free to everyone and that includes incompetent or malicious actors. You really should vet any new code you did not write yourself before using it.
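To make that lesson concrete, here's a hypothetical package.json dependency block (package names and versions are illustrative): the caret entry silently accepts new patch and minor releases on the next install, while the exact entry only ever installs the version written down.

```json
{
  "dependencies": {
    "node-ipc": "10.1.0",
    "some-other-package": "^2.3.1"
  }
}
```

Running npm install --save-exact records an exact version instead of a caret range, and using npm ci in a build pipeline installs exactly what the package-lock.json file specifies, so every upgrade becomes a deliberate choice.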

Is Protestware A Good Way to Protest?

Part of why I wanted to write an article examining this incident and how it relates to expectations in open source is because of the word "protestware". That's a new term I hadn't stumbled across before, and it seems like it's new to most of the wider development community as well.

The situation between Russia and Ukraine is incredibly hard to watch, and I feel deeply for the people of Ukraine who are being unjustly invaded by an autocrat trying to leave his mark on the world. I've got a tinge of fear because I live in Seattle, which could become a target if Putin decides to whip out the nukes. When I saw that a decently popular package on NPM decided to create some havoc for Russian users, I initially chuckled and thought that was a clever way to make a statement. The idea of protestware inherently appeals to me. Especially when used for a cause that I believe is morally just!

I imagined some Russian hacker following Putin's orders to hack a US power plant waking up one day to nothing but ❤️ emojis, ruining his whole day and screwing up his spy work. That's an incredibly satisfying image. I'm having another laugh imagining it just now.

But that's not the reality of the situation.

Internet attacks know no borders. It's entirely possible that some grandma living in Canada got hit because her ISP just bought some IP addresses that used to be located in Russia. Or (if that NGO claim I mentioned earlier is true) some desperate Ukrainian's reporting of a war crime is lost forever because they died from a bomb the next day. Or a Russian anti-war activist loses a valuable spreadsheet containing the contact information for a nationwide network of activists. Or an MIT software engineering student is using a VPN to watch some Russian soccer games and runs the protestware, losing his entire dissertation.

There's so many ways the initial node-ipc protestware could've hurt innocent people.

Which puts me in an interesting position regarding how I feel about it.

Governments have imposed economic sanctions on Russia. Companies have pulled their business. The global banking system kicked Russia out of SWIFT.

All of those actions hurt innocent people too, but I largely agree with what's being done to dissuade Putin from continuing his invasion. While economic sanctions will hurt Russians who bear no responsibility for what's going on, they are less damaging than a full-on war.

So why can't an individual make a similar choice to attempt to inflict non-physical damage on Russia?

I lean towards supporting the idea of protestware in general, and tolerating this particular situation. The developer screwed up by introducing the file-modifying change as a patch version instead of a major one and not disclosing the change. That broke the social contract for delivering open source software and will damage his credibility going forward. But philosophically he has free rein to do whatever he wants with the open source software he created, so it's hard to completely condemn him for trying to do his small part in protesting the Russian invasion of Ukraine using the skills he has at hand. It's something that could have caused real damage, though we'll likely never know the true extent. I wouldn't condone this particular functionality change, since I think there are less-damaging ways to get the same message across.

The updated version that leaves an anti-war message on a user's machine is a much easier call for me.

I think it's a brilliant way for a software engineer to make themselves heard. But there is no doubt that it would be incredibly annoying for those using that software. That is, after all, a major point of protests. They don't work if nobody notices!

However, were I using the node-ipc project I would have lost respect for the developer and the entire project because of the protestware. I get why people are upset to the point of spamming the node-ipc repository with angry and hateful issues directed at the developer, even if I think many of the messages go too far and constitute online harassment. I don't envy him trying to clean it all up and move on from this either.

Overall, I'm going to lean on what seven years of consulting taught me. The answer is: "it depends". There is a proper place for protestware. Software is a form of speech, so I think it should be protected to a reasonable degree, which includes forms of protest. Just as there are bad and good ways to hold an in-person protest, that holds true for doing it in the form of software. That line will no doubt be difficult to walk, as it is for any protest.

What Should We Expect From FOSS?

By this point, I hope I've convinced you that open source software is a grab bag that promises you nothing and everything all at once.

I love software engineering precisely because of open source. I know of nothing like it in human history. Millions of hours have been dedicated to creating software that is given away for free, to be remixed and built upon. That has led to some incredible leaps in digital technology over a few short decades. FOSS, as a concept, is a technological marvel that should be up there in importance next to the discovery of fire and agriculture. It has the potential to radically transform the world. For good, or bad. Just like any powerful technology.

But those lofty expectations should come with a dose of reality. As we saw with node-ipc, there's danger in blindly accepting open source software from other people before reviewing it yourself. The problem comes from making that a reality. Software engineers use so much software that it would be practically impossible for every developer to understand every dependency change.

It would be great for tools like NPM to make changes that prevent malicious or undesired updates from occurring in the first place. That's something we can push for in the open source community. Software engineers never met a problem that couldn't be solved with more software! 😂

Until we get immaculate tools that save us from ourselves, here are some specific actions that can be taken to secure our projects from being impacted by this kind of protestware in the future:

  • Get your software from respectable institutions that have a track record of releasing quality code.
  • Lock your dependencies so that you are only ever making a conscious decision to upgrade.
  • Review release notes for any new code you are including in your software.
  • Contribute to open source software by writing good code or reviewing the code of others to make sure it's working as expected.
  • Write your own code where possible. While you don't want to reinvent the wheel, be deliberate about what software you are using.
  • Learn about the tools you use and how they work. Don't forget to think about potential attack vectors!

In conclusion, we're probably going to see a lot more protestware in the future as software continues to be an ever larger part of our lives. The node-ipc issues remind us all that open source software offers no guarantees. While FOSS is amazing, its downsides should be recognized and considered when choosing to use new open source software. Security teams need to become more commonplace in the industry, and better ways of establishing and maintaining trust for FOSS developers and users would make it easier to sleep at night when updating your dependencies.

Ultimately, it's up to software engineers to protect their systems from malicious actors. To do so means understanding where FOSS code comes from and using that knowledge to set realistic expectations for what open source software can do for us.

Pandemic Life: Year Two

Year two is over!

I figured I would write a follow up to last year's post about what it's been like to live in a pandemic. I was desperately hoping there would be no need for a second one because the pandemic was over, but here we are.

Thankfully, the naive optimism of my first year post largely worked out despite the pandemic entering its second year of changing the world. I was fully vaccinated in May and got my booster in December. Thanks to that I was able to see friends and family way more often than in 2020.

While there was still considerable risk in 2021, it wasn't as terrifying to go out, especially when the people I was with were vaccinated. As far as I can tell I didn't get COVID despite having done all of the following with friends and family:

  • Regularly rock climbing at the gym (I climbed my first V4!)

  • Snowboarding trips to Snoqualmie

  • Camping trips to the Olympic Peninsula

  • Spending over a month in Utah, with frequent family events

  • Dining in at restaurants

  • Regularly attending D&D and board game nights

Even with that fairly busy list (at least for an introvert like me), I only ever had some slight sniffles and aches once or twice over the last year, and I never tested positive for COVID. Seems like the vaccine worked pretty well!

Just in the last few days my county has removed its mask mandate. Feels incredibly weird walking through my apartment building without a mask, and I'm still wearing it when I go to somewhat crowded places. But it seems like we're heading in the right direction. The worst of this is (hopefully) over.

When does this thing officially become endemic? That change would be nice. I know other parts of the world are not doing as well as my neck of the woods though, and I hope they can get all the resources they need to finally wrangle COVID down to endemic status.

But despite all the ups and downs, crazy news stories, a budding war in Ukraine, and countless other awful things that happened around me in 2021, this year of the pandemic was definitely better for me than the first.

I'm really hoping I don't have to write another one of these next year. 🤞

Static Code Analysis: Reducing Your Team’s Cognitive Burden

Have you ever run into a pull request that seemed impossible to merge? One with hundreds of comments from a dozen people, with two folks passionately arguing about choosing variable names, which language features to use, or whether to delete that unused method that might get used someday. Nobody can seem to agree on a set of standards, and with no ultimate authority to turn to, the code review devolves into a contest of wills.

Those pull requests from hell result in a lot of wasted time for a software engineering team. Don't you wish you could harness that extra time and funnel it back into building a quality product?

That’s where static code analysis comes to save the day!

Static code analysis is the process of analyzing source code against a standard set of rules. These rules vary based on programming language, business domain, and team preferences, but practically every major programming language has a decent static analysis tool that can be added into your team’s regular workflow.

Static code analysis can be accomplished with a variety of tools and methods. This article is going to talk about just two of them: types and linting. If you don't have either added to your team's workflow, those two are a great place to start.


Programming languages can generally be separated into two camps: those with strong types and those with weak ones.

Strongly typed languages include C++, C#, and Rust. Weakly typed languages include Python and JavaScript.

In general, types are a way of structuring the data in your code and are checked at compile time. This means bugs related to the type of data you're manipulating are caught up front, as part of the development process. A weakly typed language leads to bugs that happen at runtime, which can lead to a bad user experience or errors in production environments.

Some weakly typed languages have ways of adding in types, so don't despair if your team is already using a weakly typed language. TypeScript is a great example that extends JavaScript to include types. If your tech stack has a way of using types, you should absolutely be using them!

Some programmers, especially those who have never used types, can be hesitant to add them to their codebases. It's one extra thing to learn, and when you switch from being able to run your code immediately to having a compiler yell at you before you can even run the code, the experience can be a bit jarring.

But it's totally worth the upfront cost.

Let's look at a simple example of fetching data from an API in JavaScript:

function fetchData(id) {
    return fetch(`${id}`);
}

function doSomething(id) {
    const data = fetchData(id);

    // what can we do with data?
}
Do you have any idea what sort of data you'll be getting from the server? Even if you remember right now, will you be able to answer correctly a year after writing the code? Our brains are not perfect records of everything we've done, so at some point you'd have to look at the documentation (if there even is any) or hit some breakpoints while running the code to figure it out.

But sprinkle some TypeScript in there and life gets so much better:

interface MyApiResult {
    id: number,
    name: string,
    address: string,
    city: string,
    zipCode: string,
}

function fetchData(id: number): Promise<MyApiResult> {
    return fetch(`${id}`).then((response) => response.json());
}

async function doSomething(id: number) {
    const data = await fetchData(id);

    // We can easily use anything listed in the MyApiResult interface!
    console.log(`Hello ${data.name}. How is ${data.city} these days?`);
}
Now we can immediately see that fetchData will return some basic user information. While this example is a bit contrived, having a whole team working on a codebase and not being able to immediately see what fetchData does results in a bunch of wasted time looking at documentation or manually running the project and triggering the workflow that runs the code.

Types are the most important form of static analysis, especially as team size grows. Programming is all about manipulating data in a computer, so why shoot yourself in the foot by writing code that ignores what that data looks like?

Save your team brainpower for problems more important than the shape of your data and get yourself a language with a type system!


The other major piece of static code analysis worth adding to your team's workflow is a linter. Linting is the process of analyzing code for bugs, performance issues, proper use of language features, and stylistic choices to ensure code consistency.

Most modern languages have some sort of linting system. Some are built into the language, like Rust's cargo clippy command, while others arise from community efforts, like JavaScript's eslint.

However, initially setting up a linter can be difficult to do on a team. Remember those arguments about code style or the proper language features to use in PRs? A linter codifies that into a standard set of rules that everyone's code can be checked against. So the team will have to agree on what those rules should be and then the computer can enforce compliance with every new addition to the codebase.

The biggest gain from a linter is consistency. Even if you don't like particular linter rules, your team doesn't have to argue about what the code looks like during every pull request. A good team is full of people who will value consistency over the "perfect" linter configuration, so you should strive to pick sensible defaults that everyone can live with. Using a popular configuration is one way of quieting even the noisiest developer, since a configuration that's good enough for hundreds of thousands of other people will be good enough for your team.
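For example, here's a minimal .eslintrc.json sketch (the rule choices are illustrative, not a recommendation) that builds on a popular shared configuration and layers a few team-specific rules on top:

```json
{
  "extends": "eslint:recommended",
  "rules": {
    "eqeqeq": "error",
    "prefer-const": "error",
    "no-unused-vars": "warn"
  }
}
```

Running something like npx eslint . --max-warnings 0 as a required check in your pipeline turns that configuration into a hard merge gate rather than a polite suggestion.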

Once a linter is installed, make sure it runs automatically and that you have gates in place to not merge any new code until the linter is happy. Without a hard blocker, linter errors can and will seep into your code over time, eventually leaving you with thousands of errors or warnings that end up getting ignored by the team instead of addressed. This leads to code rot, performance issues, and a generally unpleasant developer experience when you're faced with a wall of doom anytime you see the linter run.


Programming is a creative endeavor, and human brains only have so much capacity each day. By eliminating thought from entire classes of issues, your team will be free to focus on the things that truly matter: solving problems that users of your system face.

A strong type system and sensible linting rules are two great ways to reduce your team's cognitive burden, allowing you to get more done with less time. Automation is the name of the game in software engineering, and having a computer check code against a set of rules is the perfect use of CPU cycles.

Don't spend your precious time arguing over pointless semantics. Use static code analysis tools.

This is the fifth of nine articles delving into the processes that every effective development team should use. Stay tuned for more!

Book Review: This Is How You Lose the Time War

Go read it.

This Is How You Lose the Time War is one of the most beautifully written pieces of fiction I've ever read. I even read parts of it out loud because the words were that delicious.

I don't read out loud.


I loved this book too much to write a detailed review. I'm still reeling from the experience and I can't wait to read it again.

In short, it's a love story scattered through time and space, giving you a peek into the worlds of two intergalactic time soldiers while leaving a tantalizing universe hidden between the lines on every page. It's an intimate tale of godlike spies who find themselves having more in common with each other than their own communities and how they hide their budding relationship from their own hivemind-like transhumanist(?)/alien(?) collectives.

I'm at a loss for words because nothing I write will ever be as gorgeous as the poetry within its pages.

This Is How You Lose the Time War is a lovingly crafted puzzle-box of a novel that deserves a place on your shelf.

Go read it.

Yew Hooks with GraphQL

Over the last year or so I've been occasionally hacking away at a web app called Dicebag, which will eventually become a collection of useful tools to facilitate in-person Dungeons & Dragons games.

Part of this project stems from my lack of satisfaction with other tools I've found. Most tend to focus on running a game online or preparing for games in advance. I'm wanting something that enhances the player and DM experience by presenting contextual data depending on what's happening in the game, keeping players off their phones and engaged in the story.

I'm a React developer by trade but a Rustacean at heart, so I decided to write it using the Yew framework, one of the more popular Rust web frameworks. It's been really fun so far! The app is ugly and non-functional except for a janky initiative tracker I just put in place, and even that is far from polished.

Regardless of the messy code and unpolished UI/UX, it felt great to put together a useful, generic custom hook for making GraphQL requests using Yew and the Rust graphql-client crate.

This post is a short walk-through on the anatomy of my custom GraphQL hook and ways I'd further like to improve it.

So, let's take a look at the hook! The code below is heavily annotated with comments I've added for the purposes of this blog post to explain Rust concepts, the libraries I'm using, or things I'm particularly happy with!

First up, the example GraphQL query we'll be working with:

# Query to fetch a campaign by ID. If none are provided, return all campaigns
query CampaignsQuery($campaign_id: Int) {
    campaigns(id: $campaign_id) {
        id
        name
        # ...the rest of the campaign fields, elided here
    }
}
Now an example usage of the use_query hook:

// Example usage of the campaigns query within a Yew functional component
#[function_component(CampaignsPage)]
pub fn campaigns_page() -> Html {
    let variables = campaigns_query::Variables { campaign_id: 1 };

    // I'm particularly happy with the user experience on this hook.
    // All you have to do is choose the query you want to make by specifying
    // the generic parameter's struct and pass in the variables for that query.
    // Can't get much simpler than that!
    let query = use_query::<CampaignsQuery>(variables);

    // ... use the query results to display campaign #1
    html! {}
}

And finally, the hook code itself:

// The code for the use_query hook

// Imports were elided in the original post; these are roughly what's needed
use graphql_client::{GraphQLQuery, Response};
use serde_json::json;
use wasm_bindgen_futures::spawn_local;
use yew::prelude::*;

// `graphql-client` crate builds all the types for you just by looking at the
// GraphQL server schema (which is auto-generated with a CLI command)
// and the query you wrote (which was the first code block in this post)
#[derive(GraphQLQuery)]
#[graphql(
    schema_path = "src/graphql/schema.json",
    query_path = "src/graphql/queries.graphql",
    response_derives = "Clone"
)]
pub struct CampaignsQuery;

#[derive(Clone)]
pub struct QueryResponse<T> {
    pub data: Option<T>,
    pub error: Option<String>,
}

// The query itself! There are three trait bounds, all related to the
// graphql-client crate types. The `Clone` and `'static` bits are needed
// to fulfill the lifetime requirements of the data here, since this is
// going to be used within the context of a Yew functional component
pub fn use_query<Q>(variables: Q::Variables) -> QueryResponse<Q::ResponseData>
where
    Q: GraphQLQuery, // GraphQLQuery is the trait provided by the graphql-client crate
    Q::Variables: 'static, // That trait also provides a way to specify the variables
    Q::ResponseData: Clone + 'static, // And the type you expect to get back
{
    // Local state to keep track of the API request, used to eventually
    // return the results to the user
    let state = use_state(|| QueryResponse {
        data: None,
        error: None,
    });

    // Now we get to the part of Yew that isn't so nice. I've got to clone
    // the state so I can move it into an asynchronous task, since Yew hooks
    // can't do async without spawning a local task
    let effect_state = state.clone();

    // This works identically to React's `useEffect` function
    use_effect_with_deps(
        move |_| {
            // As stated earlier, we spawn a local task in order to use
            // the asynchronous API call code
            spawn_local(async move {
                // `build_query` is another nicety provided by the GraphQLQuery type
                let request_body = Q::build_query(variables);
                let request_json = &json!(request_body);
                // reqwest is a nice Rust http client
                // (the endpoint URL was elided in the original; substitute your own)
                let request = reqwest::Client::new()
                    .post("/graphql")
                    .json(request_json)
                    .send()
                    .await;
                // Set the data or errors as the results dictate
                match request {
                    Ok(response) => {
                        // Turn the response JSON into the expected types
                        let json = response.json::<Response<Q::ResponseData>>().await;
                        match json {
                            Ok(response) => effect_state.set(QueryResponse {
                                data: response.data,
                                error: None,
                            }),
                            Err(error) => effect_state.set(QueryResponse {
                                data: None,
                                error: Some(error.to_string()),
                            }),
                        }
                    }
                    Err(error) => effect_state.set(QueryResponse {
                        data: None,
                        error: Some(error.to_string()),
                    }),
                }
            });

            // The "cleanup" function, just like in React's `useEffect`
            // Since there's nothing to clean up here, we return an empty function
            || ()
        },
        // The `useEffect` dependency here is `()`, the unit type, which is
        // equivalent to passing `[]` in React's `useEffect`
        (),
    );

    // Return the state's value to the user so they can use the API result!
    (*state).clone()
}

Isn't that cool? It has a simple API that I'm excited to use. Writing it felt similar to React, with some pain points that come from Yew being a young framework and Rust's verbose type system, but I'm quite enjoying the development process in this tech stack.

Writing the hook took me a few iterations to get the API right, since I'd never written much Rust code dealing with generics and trait bounds. In fact, as of the time of this writing, you can see at least one older version still in the codebase because I haven't migrated everything over to the new and improved one yet.

Initially I had my own Response and Query types with weird lifetimes that were annoying to write and use, because I didn't understand that I could dig into the ResponseData type on the generic Q through its GraphQLQuery bound. Going through this exercise forced me to better understand lifetimes, Clone, and generics, so I'm happy I spent the time iterating on it.

Potential Improvements

loading Field

Some GraphQL hook libraries provide a loading field on the data structure so you can tell if you're still waiting on the API. I'm conflicted on adding this, since you can discover whether the API has returned by checking if data or error is a Some value.

But it's not hard to add, and it simplifies if statements for users of the hook, so I'll probably add it once I start using the hook more heavily and feel that annoyance myself.
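If I do add it, the shape could look something like this. This is a hypothetical sketch, not code that exists in the hook today; the constructor names are made up for illustration:

```rust
// A sketch of `QueryResponse` with a `loading` field (hypothetical)
#[derive(Clone, Debug, PartialEq)]
pub struct QueryResponse<T> {
    pub data: Option<T>,
    pub error: Option<String>,
    pub loading: bool,
}

impl<T> QueryResponse<T> {
    // The initial state, before the API request has resolved
    pub fn loading() -> Self {
        QueryResponse { data: None, error: None, loading: true }
    }

    // The state once the request has settled, successfully or not
    pub fn settled(data: Option<T>, error: Option<String>) -> Self {
        QueryResponse { data, error, loading: false }
    }
}
```

The hook would initialize its state with `QueryResponse::loading()` and call `settled` inside the effect, so callers could write `if query.loading { ... }` instead of matching on both Options.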

Improved Errors

Right now I'm just smashing the errors into a string. Ideally I'd return them in a structured manner, but I just haven't gotten to that yet.
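One structured shape could be an error enum that distinguishes the three failure points in the hook. This is a hypothetical sketch; the variant names are illustrative and not from the actual codebase:

```rust
use std::fmt;

// A hypothetical structured error type to replace the stringly-typed field
#[derive(Clone, Debug, PartialEq)]
pub enum QueryError {
    // The HTTP request itself failed (network down, bad endpoint, etc.)
    Network(String),
    // The response arrived but couldn't be deserialized into ResponseData
    Deserialization(String),
    // The server responded, but the GraphQL layer reported errors
    GraphQl(Vec<String>),
}

impl fmt::Display for QueryError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            QueryError::Network(msg) => write!(f, "network error: {}", msg),
            QueryError::Deserialization(msg) => write!(f, "deserialization error: {}", msg),
            QueryError::GraphQl(msgs) => write!(f, "GraphQL errors: {}", msgs.join("; ")),
        }
    }
}
```

Callers could then match on the variant to decide whether to retry (network) or surface a bug (deserialization), while `Display` keeps the current "just show a string" behavior available.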

Refreshing the Query

Given that use_effect_with_deps has () as its dependency, this query will only run the first time the component using it renders.

Ideally I would have better control over when the query refreshes, especially in scenarios where you add something new and want the UI to update. It might be easier to just pair it with another hook that lets you refresh the whole component, or maybe it's a new parameter to the query.
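One possible shape, sketched purely as an illustration: let the caller pass an explicit dependency value and re-run the effect when it changes, using the same `PartialEq` comparison contract that `use_effect_with_deps` already relies on. Nothing below exists in the real hook, which hardcodes `()`:

```rust
// The effect re-runs whenever the dependency value changes, compared with
// `PartialEq` -- the same contract React uses for its deps array
pub fn deps_changed<D: PartialEq>(previous: &D, current: &D) -> bool {
    previous != current
}

// A caller could then hold a counter in state and bump it to force a refetch:
//     let refresh = use_state(|| 0u32);
//     let query = use_query_with_deps::<CampaignsQuery, _>(variables, *refresh);
// (hypothetical API, shown only to illustrate the idea)
```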

Time will tell. I'm not nearly close enough to caring about that kind of thing in the Dicebag app yet!

Support For Any GraphQL Client

Right now it only works with the structs produced by the graphql-client crate. That's what I use in my project, but if I were to export this hook for general use it would be nice to switch up the types as needed. I'm not even sure I can make the hook that generic, but it would be a useful learning opportunity to stretch the bounds of generics until they break.
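To make that concrete, here's a hypothetical trait the hook could be bounded on instead of `GraphQLQuery` directly, so other clients could plug in. Every name here is made up for illustration:

```rust
// A hypothetical abstraction the hook could depend on instead of
// graphql-client's GraphQLQuery trait
pub trait QueryBackend {
    type Variables;
    type ResponseData: Clone;

    // Build the JSON request body to POST to the GraphQL endpoint
    fn build_request_body(variables: Self::Variables) -> String;
}

// Example: a toy backend that formats the query by hand, showing that
// nothing about the hook's shape requires graphql-client specifically
pub struct RawStringBackend;

impl QueryBackend for RawStringBackend {
    type Variables = String;
    type ResponseData = String;

    fn build_request_body(variables: Self::Variables) -> String {
        format!("{{\"query\": \"{}\"}}", variables)
    }
}
```

The graphql-client types would get a blanket implementation of this trait, and the hook's signature would swap its `Q: GraphQLQuery` bound for `Q: QueryBackend`.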


Yew's hooks are fun! Writing my own taught me a lot more about Yew as a framework, generics, trait bounds, lifetimes, Rcs, and more.

Yew is still developing as a framework, but I'm excited to see where it goes. It already rivals React and other top JS frameworks in terms of speed, and that's with a small volunteer community working on it. WASM has a bright future, and because of that, Yew has an opportunity to play a big part in the Rust web development space. I enjoy working with it so much that I'm hoping to contribute to the project myself. And if I'm lucky, maybe I'll even get paid to write Rust on the front-end someday!

If you have any feedback regarding the hook or this post, feel free to open an issue on my repository or reach out to me on the social media platforms on my About Me page!

2021 Year In Review

As the first year of the decade comes to a close, I can breathe a sigh of relief. While 2021 wasn't great, at least it wasn't 2020.

Personally, I had a pretty solid year. As a country and global society, things could've gone much better.

Let's get the global bad out of the way first:

  • The January 6th Insurrection, which will be discussed as one of the lower points in US history for decades
  • Carbon emissions went back up after a slight lull from the pandemic
  • The pandemic remained a pandemic, even after an absurdly effective vaccine was quickly created
  • Seattle had an election that somehow resulted in us electing a Republican who joined the GOP after Trump took it over
  • Dramatic, unseasonable, and deadly weather events happened in every place I've ever lived, including the hottest day ever recorded in Seattle and the coldest day in the last 30 years
  • Breath of the Wild 2 was not released
  • Many other things that kept me awake at night that I've apparently successfully forgotten

But after a year like 2020, I've learned to manage my existential horror, so while the world may be literally burning all around me, at least I can enjoy my life and hang on to the sliver of hope that we can turn this train around before we go completely off the rails.

And with that nasty list out of the way, it's time to be positive and do some navel gazing.

Every year since 2015 I've put together a "52 Things" list. I stopped publishing them publicly after a couple years, but I've continued the practice of setting 52 goals across some fairly consistent categories:

  • Personal
  • Health
  • Finances
  • Social
  • Experiences
  • Media
  • Work

In 2021 I finished all of my social goals, all but one of my media and financial goals, and a smattering of others from the remaining categories. Altogether, I finished 24 of the 52! That's not too bad, and it's in line with most other years (with the exception of 2020, because COVID blew up my ability to do a lot of things I had planned).

Some highlights from my goals this year are:

  • Finishing 77 books
  • Climbing two V4s at the bouldering gym
  • Making a new friend, even during a pandemic
  • Hitting all of my financial savings goals
  • Getting vaccinated

Outside of goals my year also went really well.

Work was a big part of why 2021 was a good year for me. I really enjoyed settling into my role as a software engineer building websites to display scientific data. I've never been this happy with work before, and I wish I had gotten out of consulting well before 2020. I'm now earning my highest salary ever! It's weird that I got a pay increase by moving into the non-profit industry... it's almost like my skill set wasn't valued and I wasn't compensated fairly as a consultant (but that's a post for another day). Working in the non-profit industry has been rewarding, and I have a great team of people I work with every day to build quality, useful software. Plus, we have a great work/life balance culture of working hard but calling it quits at the end of the day. While I still don't get to program professionally in Rust, I do enjoy what I do, and that makes life much better.

Outside of work I tried to keep myself as busy as a pandemic would allow. I'm the Dungeon Master for a D&D group with five of my friends, and we had a good six or seven sessions throughout the year. Building a world and seeing others explore it is satisfying, and I like to think that I'm getting better at running games so they stay fun for everyone.

In addition to sitting around a table telling stories, I climbed regularly with a group of friends and while my waistline is not in great shape, I'm climbing at the highest level of my entire life! My goal is to move up another difficulty level in 2022, which will require slimming down a bit so I'm excited for that.

I also reconnected with my ex-wife after three years of space and we got back together, to the immense delight of our dog Kaladin. While there have been some bumps in the road as we get to know each other again, it's going really well so far, and being with them makes me happy!

Finally, I got to spend a lot of time with family. Between a two-week vacation in the summer and a month at my parents' house this winter, I've gotten a lot of face time with immediate and extended family. Living in Seattle means I don't see them as often, since most of my family is in Utah. I'm grateful my profession allows me the flexibility to work from just about anywhere, and long trips to see family will likely be something I do every year from now on!

So yeah, 2021 wasn't too bad. I've got a new batch of goals that I'm excited to work on, and I'm optimistic that I can get more than half finished this year!

Hope you all are as excited for the coming year as I am.

We've got this.

Government as a Service (GaaS): How the Federal Government Could Streamline State Management

Last week, the Missouri governor showed the world his technological illiteracy by vowing to prosecute a "hacker" that brought a major data leak to the government's attention. The entire tech community had a big laugh, since the government itself was sending Social Security Numbers to users that could be easily found with the barest modicum of tech know-how.

The governor's public blunder never should have happened. The fact that he publicly stated his ignorance in such an embarrassing manner demonstrates that nobody in his advisory circle knew enough about technology to tell him to stop. Nobody he knew understood that it was the government's mistake, even though the data breach was responsibly reported.

It's not a big leap to assume that nobody competent is leading Missouri's technology departments. I shudder to think what else in the state is wide-open for attackers.

Sure, it's easy to call out government incompetency (especially when it comes to technology). It's practically an American pastime. But things like this keep happening, and we should keep getting upset until the issue is solved.

Securing IT systems is no trivial task, and we make it incredibly difficult on ourselves due to the very structure of the US federal government system. States have an incredible amount of power, which means the United States has about 50 different ways of doing any one thing when it comes to running the state IT systems. That's a huge attack surface for malicious actors to find their way into.

But regardless of which state we're talking about they all need to do similar things that involve information technology. Here are just a few things I could think of off the top of my head:

  • Legislation
  • City planning
  • Taxes
  • Voting
  • DMV
  • Communications
  • Infrastructure maintenance
  • Citizen feedback
  • COVID reporting and notifying

I could keep going.

So why are we creating 50 different IT systems for these? As a small example, I live in Washington, which has a great legislation system that even allows citizens to provide feedback on bills. Looking at the same type of site from Texas, the last state I lived in, their legislation system leaves much to be desired, especially because there's no way to provide feedback on the very bills you're searching.

I'm sure both states' IT departments (or potentially hired contractors) put a lot of hours into these systems. It's great they're available, but sad that my friends in Texas don't have the same tools of democracy I have. And looking back at the utter incompetence of Missouri, many of these systems across the US were likely built on a shoestring budget by people who don't have an understanding of IT security.

All this leads me to ask: why aren't states working together to provide a great, secure technology experience for their citizens?

I argue that our federalist system discourages coordination, at least when it comes to IT systems.

One benefit to the federal system is that states get to be "laboratories of democracy". Each state can adapt their laws to its specific citizens, with a federal government theoretically providing a common floor of basic human rights that every state has to provide. Sometimes those "experiments" do leak over to other states, until things that used to be unthinkable (gay marriage or cannabis legalization) are essentially the law of the land, even without federal support. That can be a pretty great way to run a country, but it does have its pitfalls. One of which is the fragmentation of technology solutions, further exacerbating our already inefficient bureaucracy.

Maybe I'm just ignorant, but I haven't seen collaborative thinking when it comes to building and running the information technology powering our state, county, and local governments. Part of it is likely because the Internet and supporting technologies are relatively new and the machinery of government moves deliberately slowly. Another part is that private industry sucks up the best IT talent just to put it to work milking a few more dollars out of ad clicks instead of positively contributing to society. And yet another is that the one government body in place to facilitate coordination between states simply hasn't done it yet!

Now is the perfect time for the US Digital Service to create a Government as a Service (GaaS) platform.

The federal government should lead the charge in researching and developing a suite of open source state management tools that are free to use and expand upon. This would create a cooperative IT community where states can add to these systems based on their unique circumstances and make those improvements available to others. It also greatly reduces the attack vector of potential hackers, since these handful of common systems can be more efficiently hardened than all of the unique systems built in each state. Hell, even private businesses would be free to use or contribute to any of the tools that overlap with their needs.

This wouldn't even require changing laws, as far as I'm aware (though I'm no lawyer). The US Digital Service could be instructed to coordinate or build these tools through an executive order. New laws enabling this kind of digital transformation would further accelerate the quality of these shared tools, especially when it comes to allocating funds towards making the systems private and secure. And with the USDS leading the way, these systems would be fantastic. The USDS is already working on a common set of tools to standardize federal websites and create a unified user experience. They would be in the perfect position to help states take advantage of the tools already built and create even more quality tech to support state governance.

Obviously, some people will have concerns with such coordination. I imagine some folks are happy that there's no federal coordination of IT strategy in order to protect against some sort of centralized government technology takeover. But to mitigate those fears, these tools would be open source and voluntary to use. In addition, information privacy should be a major concern when creating all of these new systems. The latest encryption methods should be employed with no backdoors, and independent audits should be performed to keep everyone using these systems safe from bad actors, both internal and external.

Imagine the time and money saved if all US states coordinated in building an open source suite of government management tools.

Your next trip to the DMV could take minutes, no matter what state you live in. You could easily find and look through an interactive breakdown of your city's finances. You would no longer have to pay some company to file your state and federal taxes. Your city's administration budget could be slashed, all while getting a more responsive government.

And best of all, you could finally sleep soundly at night knowing your fellow citizens in other states are getting just as excellent an experience interacting with their government online as you are.

Podcasting's Walled Garden Problem

If you know me well, you know I'm a tad bit into podcasts. I listen to 28 different shows regularly, with 40 other shows I pick and choose from when I have the time. If I'm not listening to an audiobook, chances are I'm devouring a podcast.

I've been in love with podcasts since I discovered them over a decade ago. It's basically internet radio, except you're the DJ. Distributed through the ubiquitous RSS feed technology, they're easy to find, share, and consume.

But Spotify (and some other media organizations) are intent on changing that.

When Spotify acquired Gimlet in 2019, I felt a change in the wind. Despite saying they'd keep existing podcasts available outside of Spotify, I knew it was just a matter of time before that promise was broken.

And here we are now, in 2021. Two of my favorite shows, How to Save a Planet and Science Vs, have both become Spotify exclusives.

The hosts made many announcements leading up to their show's move to Spotify, making it clear that you could still listen for "free", as long as you did it on Spotify.

Now two excellent scientific journalism podcasts are locked away behind a Spotify account, unavailable to those of us who refuse to have two different apps for podcasting or don't want to move all their podcasts over to Spotify. I'm particularly disappointed in How to Save a Planet, since it was the one show that helped partially reduce my climate anxiety. They covered all the great work being done to alleviate the worst aspects of climate change, and it was a legitimate bright spot in my week to hear about new technologies that might save the world.

All of this wouldn't be a particularly annoying problem if Spotify's app actually worked well for podcasts. There's no way to add custom feeds, which is a must-have for people like me who support my favorite podcasters on Patreon and have private RSS links that provide access to bonus content. To listen on Spotify, I'd have to maintain podcast lists on two different apps for no good reason.

And once you try to listen to a podcast on Spotify, you quickly realize it's a horrific experience. Podcasting is an afterthought for the developers of Spotify. They only recently added speed controls after years of having podcasts available, and managing the podcasts you follow and which episodes you want to listen to is an unintuitive experience.

Nobody would choose Spotify as their podcast listening app of choice, so Spotify has decided to acquire great shows and force fans to use their application in an attempt to fully capture the revenue stream for those shows.

Once you throw money into the equation, this all makes perfect sense. If a podcast is only available on Spotify (even if it's free), Spotify will receive all ad revenue for the shows since it can use its existing ad placement technology that was developed on the music side of the business. They want to control all aspects of the show in order to maximize their profit. You have to have a Spotify account to listen to the podcast, which makes it that much easier to turn a listener into a paying Spotify user.

Someone at Spotify must have run the numbers and shown that putting its shows in their walled garden and losing listeners is still more profitable than having it widely available. It's a downright shame, since many of the Gimlet shows they acquired are incredibly informative and contain information that will make this world a better place.

I fully expect this trend to continue, and probably accelerate. That's why I'm a huge proponent of paying for your favorite shows through sites like Patreon. Directly supporting artists with small monthly contributions reduces their dependence on ads and helps keep them independent.

If you have a favorite show, please consider regularly supporting them using whichever method they prefer. The consolidation of podcast networks and ownership will continue to create these walled gardens, leading to wonderful content being hidden from millions of listeners.

It's up to passionate listeners to support these artists enough that they don't have to sell their souls to the giant corporations just looking to milk them for ad revenue. Please do your part and keep the information flowing freely, just as it was intended to do.