Lane Sawyer🌹

Just trying to leave the world a little better than I found it.

When Are We Going to Do Something?

I said I wasn't going to write about this again, but I will keep chaining together these periodic posts about gun violence in the United States every time something particularly egregious happens.

We just had the racist shooting in Buffalo. Now we've got the senseless Uvalde, Texas shooting.

At this point I've given up hope that we'll do anything regarding gun control. There is so much we could do without even coming close to running afoul of the 2nd Amendment, but we don't because our legislative branch has been broken for decades.

But I've already written too many words. I said I wasn't going to write about this again. Please reference my past work. The points I make there still stand.

And fucking call and email your Senators. Senseless death due to lackadaisical regulation of firearms shouldn't be a partisan issue.

SEATTLE SOUNDERS ARE CONCACAF CHAMPIONS

We won the CONCACAF Champions League title tonight! It was the best soccer match I've ever attended. We set the CONCACAF Champions League attendance record with 68k+ people screaming our hearts out as we scored each of our three goals to win the championship!

Next up for the Sounders is the 2023 FIFA Club World Cup against some of the best clubs across the world.

And now that our CCL run is over we can get back to focusing on MLS play.

What. A. Game.

What. A. Team.

What. A. City.

I love Seattle.

Hacking Legacy Sites for Fun and (Non)profit

Audience

This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are included below to give context to those without a background in developing software.

Definitions

  • GDPR (General Data Protection Regulation): A European Union law focusing on data protection and privacy. California has a similar one called the CCPA (California Consumer Privacy Act). There is no federal law in the USA providing data privacy protection.
  • Cookie banner: Those annoying cookie notifications you get on every new site you visit asking you to choose how closely you want the website to track your behavior.
  • Google Analytics: Google's analytics platform for tracking user behavior. Used by a mind-boggling number of sites.
  • API (Application Programming Interface): Enables applications to exchange data with each other using a documented interface. A major revolution in computer science that enabled the software industry to grow so quickly.
  • JSON (JavaScript Object Notation): A standardized format for representing JavaScript data as human-readable text.
  • regex (Regular Expression): An esoteric way of searching through text using patterns. For example, this regular expression was written by Satan himself to match email addresses: (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

Recently at work I had to fix a few legacy websites with broken cookie banners after we did a major GDPR compliance effort across all the publicly accessible websites. These sites were created 14 years ago and haven't been updated in many years. It's a technological wonder that they're still up and running, but there they are!

Unfortunately, their old age makes delivering updates difficult. And thanks to some technology choices that broke the modern cookie banner code, there were some updates that needed delivering.

Thankfully, those sites already had Google Analytics. Besides being able to track your every move on a website, Google Analytics has the handy feature of remotely delivering code snippets! That's actually how the cookie banner software is delivered to these old sites in the first place. So instead of trying to figure out how to resurrect extremely old deployment infrastructure, I decided to first try to hack together a solution to fix the broken cookie banner software and patch the website via Google Analytics.

That effort turned into the hackiest code I've ever written. It's ugly, nonsensical without the context of the problem at hand, and uses browser APIs I hardly knew existed.

But it works!

And that was the key point. We have no plans to actively return to those legacy sites and provide new updates. All that mattered is we were compliant with GDPR. Were we actively maintaining those sites or had major rework for them on the horizon, I wouldn't have turned to my hacky solution. I showed what I wrote to a couple of good friends and they were rightly horrified at what I had done.

But again, it works!

So let's take a look at the code.

First up, I added a forEach method to the JavaScript String prototype.

...

Yeah. It's that bad.

The good news is that since forEach on a String makes no sense, the site never tries to call it anywhere else, so there are no conflicts!

But when we look at the actual implementation, it gets worse.

Theoretically, in a sane world, forEach on a String might be a method that loops through each character in a string and lets you do something with it. That would make a bit of sense and can already be done in JavaScript, just not using forEach.
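For reference, here's what that sane, hypothetical version could look like, using iteration JavaScript already provides:

// A hypothetical, sensible String.forEach: visit each character.
// JavaScript can already do this without touching the prototype:
for (const character of "hello") {
    console.log(character); // h, e, l, l, o
}

// Or, equivalently, by spreading the string into an array:
[..."hello"].forEach(character => console.log(character));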

But that's not what I did. I discovered that the cookie banner broke because we had a String instead of a JSON object. But Strings can be turned into JSON!

"So", I thought, "what if I turn the String into the JSON object the code expected, then do the forEach stuff that was supposed to happen anyway on my newly created object!"

Turns out, that actually worked 🤣

String.prototype.forEach = function(originalForEachFunction) {
    // Parse the string into the array the broken code expected...
    var stringToJSON = JSON.parse(this);
    // ...then run the forEach that was supposed to happen all along
    stringToJSON.forEach(originalForEachFunction);
};
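With that shim in place, a call like this hypothetical one parses the string and iterates the resulting array:

// The cookie banner's stringified array now "just works"
'["analytics","marketing"]'.forEach(function(category) {
    console.log(category); // "analytics", then "marketing"
});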

However, the journey wasn't over. While that fixed the error I was seeing and got the cookie banner to appear, I noticed there was an error when accepting any cookies! Apparently, the cookie banner would make a server call to record what preferences were selected.

I dug into the code and discovered that the network call was failing because the same String I turned into a JSON object earlier was still a string later when it should be an object! That's because the code above didn't actually modify the string at all.

At this point I thought I hit an impasse. There was no obvious way for me to insert myself into the code like I did earlier with my vomit-inducing String.forEach hack.

I let my brain stew on it for a while. That evening, I listened to a new episode of Darknet Diaries, a phenomenal podcast that tells stories about the dark side of the internet, mainly focusing on hackers and computer security. It's one of my favorite podcasts, and it reminded me that I should think like a hacker regarding my cookie banner problem.

And what would a hacker do?

Intercept every single network call, look for the data they're interested in, and modify it as needed!

While I typically don't work during the evenings, this problem and my new idea were burning a hole in my head and I had to try it out immediately.

So there I sat, the faint glow of the computer lighting up my face in the dark room, digging up browser API documentation on how to peek at every network call being made. That night of hacking led me to create this monstrosity, which involves XMLHttpRequest, regex replacements, and lots of null checks (I modified the code to simplify what's going on and to provide some minor obfuscation, so imagine something even worse):

var originalSend = XMLHttpRequest.prototype.send;

XMLHttpRequest.prototype.send = function(data) {
    // Only touch requests that carry the field the cookie banner mangles
    if (data && data.brokenField) {
        // Undo the over-escaping so the field is valid JSON again...
        data.brokenField = data.brokenField.replace(/\\\"/g, '"').replace(/\"\[/g, '[').replace(/\]\"/g, ']');
        // ...then parse it into the object the server expects
        data.brokenField = JSON.parse(data.brokenField);
    }

    originalSend.call(this, data);
};
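One design note on the code above: patching XMLHttpRequest.prototype.send intercepts every request the page makes, not just the cookie banner's. That's why the guard clause only touches requests carrying the broken field, and why the real version needed all those null checks.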

It's horrible and I hate that my brain came up with it, but it works!

The best part about it is that I never would've been able to come up with such a bonkers idea earlier in my career. I'm at a point where I feel extremely comfortable with web development technologies, meaning I now understand what is available to me and how I can bend the rules. That kind of mastery feels incredibly good once you're there and the feeling of getting something working in a non-traditional manner is the heart of the hacker spirit. Makes me think I would've had a solid career as a white hat hacker in another life!

Anyway, I hope you hated that code as much as I did. The hack has been humming away in production for a few weeks now and works flawlessly.

And before you ask, yes, I heavily documented what is going on with the hack in several places so that people won't be confused when they find my monster a few years down the road.

Until the next hack,

/Lane

What Should We Expect From FOSS?

Audience

This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are included below to give context to those without a background in developing software.

Definitions

  • Free and Open Source Software (FOSS): Software with published source code that anyone is free to use, study, or modify
  • JavaScript: The world's most popular programming language
  • Node Package Manager (NPM): An online collection of JavaScript code and associated set of tools that software developers use to share their work with others
  • Package: A bundle of code that can have different versions, allowing for software to be updated over time without forcing code using it to immediately upgrade
  • Protestware: A portmanteau of protest and malware, with malware being a portmanteau of malicious software
  • Software License: A document associated with a software project explaining how other developers can use, modify, or share the code

Yesterday, a new vulnerability was reported in the National Institute of Standards and Technology's National Vulnerability Database regarding some "protestware" that was added to a popular JavaScript NPM package that gets about 1 million downloads a week.

The owner of the node-ipc package updated the code to add a 1 in 4 chance of deleting the contents of all the files on your computer and replacing them with the ❤️ emoji if you had an IP address that came from Russia or Belarus. This affected versions 10.1.1 to 10.1.3, meaning a patch version inappropriately delivered this breaking change.

Later, the owner of the package removed this behavior and correctly published a different form of protest as a new major version (11.0.0) that uses the peacenotwar "protestware" package (which was also written by him). Using node-ipc will now put an anti-war message in a text file on the user's desktop, instead of modifying existing files on the user's system. This happens for all users, not just those with IP addresses from Russia or Belarus.

While the more malicious ❤️ emoji update was not available for very long, it still affected many projects and people, including popular ones like the Vue CLI, a developer tool to facilitate building websites. One person even claimed to be part of an NGO that lost thousands of files they were collecting to document Russian war crimes.

This whole thing has caused a bit of an uproar in the online developer community. People are flooding the node-ipc and peacenotwar repositories with issues calling the developer a Nazi or expressing disappointment because the protestware will damage the reputation and trust of open source software. And even more people are watching the deluge of comments with interest, since this is not the first time a developer has updated a popular NPM package to send a message to the broader software development community.

As a software engineer myself, I fall into that last group of interested spectators. All this has been fascinating to watch and has led me to closely examine my beliefs about what it means to use and develop Free and Open Source Software (FOSS) and how I can prevent something like this most recent NPM issue from affecting my team.

So with that context, let's dive into the actual article: What should we expect from FOSS?

Software Licensing

First, let's start by looking at how software licensing works in the open source community, and whether this particular protestware broke the terms of its license.

The license for node-ipc is the popular and flexible MIT license, which offers the software "as is", to be used however the user wants. peacenotwar is licensed under the stricter GPL-3.0 license, which requires any modifications to be published under the same license and the source code be made available.

While I'm not a lawyer, my understanding is that both licenses absolve the developer of any liability for issues that arise from using the software. Clauses like this are common in the licenses open source projects choose, so it's not surprising to see them in this case. But many of the people upset about node-ipc seem to not understand that downloading software from a random person on the internet comes with no guarantees, especially given the MIT and GPL-3.0 licenses attached.

From my perspective and experience, node-ipc and peacenotwar are following the terms of their license, even while providing undesired functionality in an updated version of the node-ipc package.

What can this tell us about open source software?

To put it harshly: you get what you paid for and this software was free.

Open source is about making sure the source code is easily accessible. It has nothing to do with quality. For every amazing piece of open source software, there are hundreds of awful ones. I should know, since I've written some of the useless ones! All you have to do is look at the GitHub profile of a random developer and you'll stumble across a pile of code that is technically open source, but is not (and never will be) worth using.

The lesson here is: understand that open source software licenses promise you nothing, other than that their source code will be publicly available for examination.

Versioning

So if there isn't an open source license that protects the user from malicious code updates, what could prevent open source software from delivering malware?

Versioning. Theoretically.

In an ideal world, every update to software would be closely vetted by a team of experts who verified it behaved correctly before being published for the world to use. In that perfect parallel universe, even if a malicious update got past the expert team nobody would download that update before checking it themselves and it would never be set to update to an unchecked version automatically.

Alas, we do not live in such a paradise.

NPM uses Semantic Versioning, which is a widely used standard for labeling new versions of software. But it's just a convention, so there is nothing preventing a developer from breaking the rules when creating new versions. That's what happened with node-ipc, since it introduced the file-destroying protestware as a "patch" update. Patches are meant for non-breaking changes, like bug fixes, that do not change behavior for the end user.

Clearly, wiping files on the computer is a breaking change, so the owner of node-ipc broke the versioning "contract".

Software development relies on an incredible amount of trust. When you use someone else's software, they often have used some other person's software to create it. This leads to a long chain of dependencies, meaning your website to share pictures of cute animals was ultimately created by the work of hundreds or thousands of people. That trust and sharing of quality software is a major part of why there's been incredible growth in the tools available to software engineers and the resulting applications being produced.

But it does have its downside, which was clearly on display with the node-ipc update.

That trust is exploited by the default behavior of NPM when adding new software dependencies. NPM defaults to the caret (^) "compatible with version" range when determining dependencies, which will automatically pull in new minor and patch versions when running a very common NPM command (npm install). While this can be helpful for quickly distributing software updates like bug fixes or performance improvements, it should not be the default precisely because people can abuse Semantic Versioning.

Because of the default behavior of a widely used tool, any developers that did not take the extra time to lock their package versions could have woken up a few days ago to a hard drive full of ❤️ emojis.
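To make that range behavior concrete, here's a small sketch using the semver package (the same version-range logic NPM itself relies on); the version numbers are the real node-ipc ones from above:

// npm install semver
const semver = require('semver');

// The caret range NPM writes by default covers the malicious patches...
console.log(semver.satisfies('10.1.2', '^10.1.1')); // true
console.log(semver.satisfies('10.1.3', '^10.1.1')); // true

// ...but an exactly pinned version only ever matches itself...
console.log(semver.satisfies('10.1.2', '10.1.1')); // false

// ...and no caret range reaches across a major version bump
console.log(semver.satisfies('11.0.0', '^10.1.1')); // false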

Engineers should take the time to understand the tools they are using and how software versioning behaviors could impact their code, but the reality is that most don't. Take me for example. I didn't completely understand how versioning worked in NPM earlier in my career even though I had been using it for years and I'm the kind of person who enjoys reading software documentation for fun! Many software engineers face tight deadlines. Unfortunately, things like dependency analysis and reviews don't happen for a good portion of newly written software.

Looking back at node-ipc's versioning, there is now a version 11.0.1, which is a new major version that prominently states that the tool now contains the peacenotwar package, which is far less malicious than the original protestware. This is versioning done properly. While the new version still delivers unwanted functionality, at least node-ipc is now following versioning standards when making noteworthy changes.

The lesson here is: lock your dependencies and review any software upgrades closely. Open source guarantees neither working code nor proper versioning. The whole point is to be open and free to everyone, and that includes incompetent or malicious actors. You really should vet any new code you did not write yourself before using it.

Is Protestware A Good Way to Protest?

Part of why I wanted to write an article examining this incident and how it relates to expectations in open source is because of the word "protestware". That's a new term I hadn't stumbled across before, and it seems like it's new to most of the wider development community as well.

The situation between Russia and Ukraine is incredibly hard to watch, and I feel deeply for the people of Ukraine who are being unjustly invaded by an autocrat trying to leave his mark on the world. I've got a tinge of fear because I live in Seattle, which could become a target if Putin decides to whip out the nukes. When I saw that a decently popular package on NPM decided to create some havoc for Russian users, I initially chuckled and thought that was a clever way to make a statement. The idea of protestware inherently appeals to me. Especially when used for a cause that I believe is morally just!

I imagined some Russian hacker following Putin's orders to hack a US power plant waking up one day to nothing but ❤️ emojis, ruining his whole day and screwing up his spy work. That's an incredibly satisfying image. I'm having another laugh imagining it just now.

But that's not the reality of the situation.

Internet attacks know no borders. It's entirely possible that some grandma living in Canada got hit because her ISP just bought some IP addresses that used to be located in Russia. Or (if that NGO claim I mentioned earlier is true) some desperate Ukrainian's reporting of a war crime is lost forever because they died from a bomb the next day. Or a Russian anti-war activist loses a valuable spreadsheet containing the contact information for a nationwide network of activists. Or an MIT software engineering student is using a VPN to watch some Russian soccer games and runs the protestware, losing his entire dissertation.

There's so many ways the initial node-ipc protestware could've hurt innocent people.

Which puts me in an interesting position regarding how I feel about it.

Governments have imposed economic sanctions on Russia. Companies have pulled their business. The global banking system kicked Russia out of SWIFT.

All of those actions hurt innocent people too, but I largely agree with what's being done to dissuade Putin from continuing his invasion. While economic sanctions will hurt Russians who bear no responsibility for what's going on, they are less damaging than a full-on war.

So why can't an individual make a similar choice to attempt to inflict non-physical damage on Russia?

I lean towards supporting the idea of protestware in general, and tolerating this particular situation. The developer screwed up by introducing the file-modifying change as a patch version instead of a major one and not disclosing the change. That broke the social contract for delivering open source software and will damage his credibility going forward. But philosophically he has free rein to do whatever he wants with the open source software he created, so it's hard to completely condemn him for trying to do his small part in protesting the Russian invasion of Ukraine using the skills he has at hand. It's something that could have caused real damage, though we'll likely never know the true extent. I wouldn't condone this particular functionality change, since I think there are less-damaging ways to get the same message across.

The updated version that leaves an anti-war message on a user's machine is a much easier call for me.

I think it's a brilliant way for a software engineer to make themselves heard. But there is no doubt that it would be incredibly annoying for those using that software. That is, after all, a major point of protests. They don't work if nobody notices!

However, were I using the node-ipc project I would have lost respect for the developer and the entire project because of the protestware. I get why people are upset to the point of spamming the node-ipc repository with angry and hateful issues directed towards the developer, even if I think many of the messages go too far and constitute online harassment. I don't envy him trying to clean it all up and move on from this either.

Overall, I'm going to lean on what seven years of consulting taught me. The answer is: "it depends". There is a proper place for protestware. Software is a form of speech, so I think it should be protected to a reasonable degree, which includes forms of protest. Just as there are good and bad ways to hold an in-person protest, the same holds true for protest in the form of software. That line will no doubt be difficult to walk, as it is for any protest.

What Should We Expect From FOSS?

By this point, I hope I've convinced you that open source software is a grab bag that promises you nothing and everything all at once.

I love software engineering precisely because of open source. I know of nothing like it in human history. Millions of hours have been dedicated to creating software that is given away for free, to be remixed and built upon. That has led to some incredible leaps in digital technology over a few short decades. FOSS, as a concept, is a technological marvel that should be up there in importance next to the discovery of fire and agriculture. It has the potential to radically transform the world. For good, or bad. Just like any powerful technology.

But those lofty expectations should come with a dose of reality. As we saw with node-ipc, there's danger in blindly accepting open source software from other people without reviewing it yourself. The problem is making that review a reality: software engineers use so much software that it would be practically impossible for every developer to understand every dependency change.

It would be great for tools like NPM to make changes that prevent malicious or undesired updates from occurring in the first place. That's something we can push for in the open source community. Software engineers never met a problem that couldn't be solved with more software! 😂

Until we get immaculate tools that save us from ourselves, here are some specific actions that can be taken to secure our projects from being impacted by this kind of protestware in the future:

  • Get your software from respectable institutions that have a track record of releasing quality code.
  • Lock your dependencies so that you are only ever making a conscious decision to upgrade (see the sketch after this list).
  • Review release notes for any new code you are including in your software.
  • Contribute to open source software by writing good code or reviewing the code of others to make sure it's working as expected.
  • Write your own code where possible. While you don't want to reinvent the wheel, be deliberate about what software you are using.
  • Learn about the tools you use and how they work. Don't forget to think about potential attack vectors!
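To put the "lock your dependencies" bullet into practice, here's a rough sketch of a script that flags any dependency range in a package.json that isn't pinned to an exact version (the file name and output format are just illustrative):

// check-pins.js (illustrative name): flag ranges that are not exact pins
const fs = require('fs');
const semver = require('semver');

const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
const deps = { ...pkg.dependencies, ...pkg.devDependencies };

for (const [name, range] of Object.entries(deps)) {
    // semver.valid() accepts exact versions like "10.1.1" and
    // returns null for ranges like "^10.1.1" or "~10.1.1"
    if (!semver.valid(range)) {
        console.warn(`${name} is not pinned to an exact version: "${range}"`);
    }
}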

In conclusion, we're probably going to see a lot more protestware in the future as software continues to be an ever larger part of our lives. The node-ipc issues remind us all that open source software offers no guarantees. While FOSS is amazing, its downsides should be recognized and considered when choosing to use new open source software. Security teams need to become more commonplace in the industry, and better ways of establishing and maintaining trust for FOSS developers and users would make it easier to sleep at night when updating your dependencies.

Ultimately, it's up to software engineers to protect their systems from malicious actors. To do so means understanding where FOSS code comes from and using that knowledge to set realistic expectations for what open source software can do for us.

Pandemic Life: Year Two

Year two is over!

I figured I would write a follow-up to last year's post about what it's been like to live in a pandemic. I was desperately hoping there would be no need for a second one because the pandemic was over, but here we are.

Thankfully, the naive optimism of my first year post largely worked out despite the pandemic entering its second year of changing the world. I was fully vaccinated in May and got my booster in December. Thanks to that I was able to see friends and family way more often than in 2020.

While there was still considerable risk in 2021, it wasn't as terrifying to go out, especially when the people I was with were vaccinated. As far as I can tell I didn't get COVID despite having done all of the following with friends and family:

  • Regularly rock climbing at the gym (I climbed my first V4!)

  • Snowboarding trips to Snoqualmie

  • Camping trips to the Olympic Peninsula

  • Spending over a month in Utah, with frequent family events

  • Dining in at restaurants

  • Regularly attending D&D and board game nights

Even with that fairly busy list (at least for an introvert like me), I only ever had some slight sniffles and aches once or twice over the last year, and I never tested positive for COVID. Seems like the vaccine worked pretty well!

Just in the last few days my county has removed its mask mandate. Feels incredibly weird walking through my apartment building without a mask, and I'm still wearing it when I go to somewhat crowded places. But it seems like we're heading in the right direction. The worst of this is (hopefully) over.

When does this thing officially become endemic? That change would be nice. I know other parts of the world are not doing as well as my neck of the woods though, and I hope they can get all the resources they need to finally wrangle COVID down to endemic status.

But despite all the ups and downs, crazy news stories, a budding war in Ukraine, and countless other awful things that happened around me in 2021, this year of the pandemic was definitely better for me than the first.

I'm really hoping I don't have to write another one of these next year. 🤞

Static Code Analysis: Reducing Your Team’s Cognitive Burden

Have you ever run into a pull request that seemed impossible to merge? One with hundreds of comments from a dozen people, with two folks passionately arguing about choosing variable names, which language features to use, or whether to delete that unused method that might get used someday. Nobody can seem to agree on a set of standards, and with no ultimate authority to turn to, the code review devolves into a contest of wills.

Those pull requests from hell result in a lot of wasted time for a software engineering team. Don't you wish you could harness that extra time and funnel it back into building a quality product?

That’s where static code analysis comes to save the day!

Static code analysis is the process of analyzing source code against a standard set of rules. These rules vary based on programming language, business domain, and team preferences, but practically every major programming language has a decent static analysis tool that can be added into your team’s regular workflow.

Static code analysis can be accomplished with a variety of tools and methods. This article is going to talk about just two of them: types and linting. If you don't have either added to your team's workflow, those two are a great place to start.

Types

Programming languages can generally be separated into two camps: those with static types and those with dynamic types.

Static typing shows up in languages like C++, C#, and Rust. Dynamic typing is found in languages like Python and JavaScript.

In general, types are a way of structuring the data in your code, and in a statically typed language they are checked at compile time. This means bugs related to the type of data you're manipulating are caught up front, as part of the development process. A dynamically typed language surfaces those same bugs at runtime instead, which can lead to a bad user experience or errors in production environments.

Some dynamically typed languages have ways of adding types, so don't despair if your team is already using one. TypeScript is a great example: it extends JavaScript to include types. If your tech stack has a way of using types, you should absolutely be using them!

Some programmers, especially those who have never used types, can be hesitant to add them to their codebases. It's one extra thing to learn, and when you switch from being able to run your code immediately to having a compiler yell at you before you can even run it, the experience can be a bit jarring.

But it's totally worth the upfront cost.

Let's look at a simple example of fetching data from an API in JavaScript:

function fetchData(id) {
    return fetch(`https://my-api.com/data/${id}`).then(response => response.json());
}

async function doSomething(id) {
    const data = await fetchData(id);

    // what can we do with data?
}

Do you have any idea what sort of data you'll be getting from the server? Even if you remember right now, will you be able to answer correctly a year after writing the code? Our brains are not perfect records of everything we've done, so at some point you'd have to look at the documentation (if there even is any) or hit some breakpoints while running the code to figure it out.

But sprinkle some TypeScript in there and life gets so much better:

interface MyApiResult {
    id: number,
    name: string,
    address: string,
    city: string,
    zipCode: string,
}

function fetchData(id: number): Promise<MyApiResult> {
    return fetch(`https://my-api.com/data/${id}`).then(response => response.json());
}

async function doSomething(id: number) {
    const data = await fetchData(id);

    // We can easily use anything listed in the MyApiResult interface!
    console.log(`Hello ${data.name}. How is ${data.city} these days?`);
}

Now we can immediately see that fetchData will return some basic user information. While this example is a bit contrived, having a whole team working on a codebase and not being able to immediately see what fetchData does results in a bunch of wasted time looking at documentation or manually running the project and triggering the workflow that runs the code.

Types are the most important type of static analysis, especially as team size grows. Programming is all about manipulating data in a computer, so why shoot yourself in the foot by writing code that ignores what that data looks like?

Save your team brainpower for problems more important than the shape of your data and get yourself a language with a type system!

Linting

The other major piece of static code analysis worth adding to your team's workflow is a linter. Linting is the process of analyzing code for bugs, performance issues, proper use of language features, and stylistic choices to ensure code consistency.

Most modern languages have some sort of linting system. Some are built into the language, like Rust's cargo clippy command, while others arise from community efforts, like JavaScript's eslint.

However, initially setting up a linter can be difficult to do on a team. Remember those arguments about code style or the proper language features to use in PRs? A linter codifies that into a standard set of rules that everyone's code can be checked against. So the team will have to agree on what those rules should be and then the computer can enforce compliance with every new addition to the codebase.

The biggest gain from a linter is consistency. Even if you don't like particular linter rules, your team doesn't have to argue about what the code looks like during every pull request. A good team is full of people who will value consistency over the "perfect" linter configuration, so you should strive to pick sensible defaults that everyone can live with. Using a popular configuration is one way of quieting even the noisiest developer, since a configuration that's good enough for hundreds of thousands of other people will be good enough for your team.
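As one example, a minimal ESLint configuration that leans on a shared baseline might look something like this sketch (the specific environments and overrides are placeholders for whatever your team agrees on):

// .eslintrc.js: a minimal starting point built on a shared baseline
module.exports = {
    root: true,
    extends: ['eslint:recommended'],
    env: {
        browser: true,
        es2021: true,
    },
    rules: {
        // Keep team-specific overrides short; the fewer custom rules,
        // the fewer arguments about them
        'no-unused-vars': 'error',
    },
};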

Once a linter is installed, make sure it runs automatically and that you have gates in place so no new code merges until the linter is happy. Without a hard blocker, linter errors can and will seep into your code over time, eventually leaving you with thousands of errors or warnings that end up getting ignored by the team instead of addressed. This leads to code rot, performance issues, and a generally unpleasant developer experience when you're faced with a wall of doom anytime you see the linter run.

Conclusion

Programming is a creative endeavor, and human brains only have so much capacity each day. By eliminating thought from entire classes of issues, your team will be free to focus on the things that truly matter: solving problems that users of your system face.

A strong type system and sensible linting rules are two great ways to reduce your team's cognitive burden, allowing you to get more done with less time. Automation is the name of the game in software engineering, and having a computer check code against a set of rules is the perfect use of CPU cycles.

Don't spend your precious time arguing over pointless semantics. Use static code analysis tools.


This is the fifth of nine articles delving into the processes that every effective development team should use. Stay tuned for more!

Book Review: This Is How You Lose the Time War

Go read it.

This Is How You Lose the Time War is one of the most beautifully written pieces of fiction I've ever read. I even read parts of it out loud because the words were that delicious.

I don't read out loud.

Ever.

I loved this book too much to write a detailed review. I'm still reeling from the experience and I can't wait to read it again.

In short, it's a love story scattered through time and space, giving you a peek into the worlds of two intergalactic time soldiers while leaving a tantalizing universe hidden between the lines on every page. It's an intimate tale of godlike spies who find themselves having more in common with each other than their own communities and how they hide their budding relationship from their own hivemind-like transhumanist(?)/alien(?) collectives.

I'm at a loss for words because nothing I write will ever be as gorgeous as the poetry within its pages.

This Is How You Lose the Time War is a lovingly crafted puzzle-box of a novel that deserves a place on your shelf.

Go read it.

Yew Hooks with GraphQL

Over the last year or so I've been occasionally hacking away at a web app called Dicebag, which will eventually become a collection of useful tools to facilitate in-person Dungeons & Dragons games.

Part of this project stems from my lack of satisfaction with other tools I've found. Most tend to focus on running a game online or preparing for games in advance. I want something that enhances the player and DM experience by presenting contextual data depending on what's happening in the game, keeping players off their phones and engaged in the story.

I'm a React developer by trade but a Rustacean at heart, so I decided to write it using the Yew framework, one of the more popular Rust web frameworks. It's been really fun so far! The app is ugly and non-functional except for a janky initiative tracker I just put in place, and even that is far from polished.

Regardless of the messy code and unpolished UI/UX, it felt great to put together a useful, generic custom hook for making GraphQL requests using Yew and the Rust graphql-client crate.

This post is a short walk-through on the anatomy of my custom GraphQL hook and ways I'd further like to improve it.

So, let's take a look at the hook! The code below is heavily annotated with comments I've added for the purposes of this blog post to explain Rust concepts, the libraries I'm using, or things I'm particularly happy with!

First up, the example GraphQL query we'll be working with:

# Query to fetch a campaign by ID. If none are provided, return all campaigns
query CampaignsQuery($campaign_id: Int) {
    campaigns(id: $campaign_id) {
        id
        name
        description
    }
}

Now an example usage of the use_query hook:

// Example usage of the campaigns query within a Yew functional component

#[function_component(CampaignsPage)]
pub fn campaigns_page() -> Html {
    // $campaign_id is a nullable Int, so the generated field is an Option
    let variables = campaigns_query::Variables { campaign_id: Some(1) };

    // I'm particularly happy with the user experience on this hook.
    // All you have to do is choose the query you want to make by specifying
    // the generic parameter's struct and pass in the variables for that query.
    // Can't get much simpler than that!
    let query = use_query::<CampaignsQuery>(variables);

    // ... use the query results to display campaign #1
}

And finally, the hook code itself:

// The code for the use_query hook

// `graphql-client` crate builds all the types for you just by looking at the
// GraphQL server schema (which is auto-generated with a CLI command)
// and the query you wrote (which was the first code block in this post)
#[derive(GraphQLQuery)]
#[graphql(
    schema_path = "src/graphql/schema.json",
    query_path = "src/graphql/queries.graphql",
    response_derives = "Clone"
)]
pub struct CampaignsQuery;

#[derive(Clone)]
pub struct QueryResponse<T> {
    pub data: Option<T>,
    pub error: Option<String>,
}

// The query itself! There are three trait bounds, all related to the
// graphql-client crate types. The `Clone` and `'static` bits are needed
// to fulfill the lifetime requirements of the data here, since this is
// going to be used in the context of a Yew functional component
pub fn use_query<Q>(variables: Q::Variables) -> QueryResponse<Q::ResponseData>
where
    Q: GraphQLQuery, // GraphQLQuery is the trait provided by the graphql-client crate
    Q::Variables: 'static, // That trait also provides a way to specify the variables
    Q::ResponseData: Clone + 'static, // And the type you expect to get back
{
    // Local state to keep track of the API request, used to eventually
    // return the results to the user
    let state = use_state(|| QueryResponse {
        data: None,
        error: None,
    });

    // Now we get to the part of Yew that isn't so nice. I've got to clone 
    // the state so I can move it into an asynchronous thread, since Yew hooks
    // can't do async without spinning up a local thread
    let effect_state = state.clone();

    // This works identically to React's `useEffect` function
    use_effect_with_deps(
        move |_| {
            // As stated earlier, we spin up a thread in order to use
            // the asynchronous API call code
            spawn_local(async move {
                // `build_query` is another nicety provided by the GraphQLQuery type
                let request_body = Q::build_query(variables);
                let request_json = &json!(request_body);
                // reqwest is a nice Rust http client
                let request = reqwest::Client::new()
                    .post("http://my-server.domain.com")
                    .json(request_json)
                    .send()
                    .await;
                // Set the data or errors as the results dictate
                match request {
                    Ok(response) => {
                        // Turn the response JSON into the expected types
                        let json = response.json::<Response<Q::ResponseData>>().await;
                        match json {
                            Ok(response) => effect_state.set(QueryResponse {
                               data: response.data,
                               error: None,
                            }),
                            Err(error) => effect_state.set(QueryResponse {
                                data: None,
                                error: Some(error.to_string()),
                            }),
                        }
                    }
                    Err(error) => effect_state.set(QueryResponse {
                        data: None,
                        error: Some(error.to_string()),
                    }),
                }
            });

            // The "cleanup" function, just like in React's `useEffect`
            // Since there's nothing to cleanup here, we write an empty function
            || ()
        },
        // The `useEffect` dependency here is `()`, the unit type, which is
        // equivalent to passing `[]` in React's `useEffect`
        (),
    );

    // Return the state's value to the user so they can use the API result!
    (*state).clone()
}

Isn't that cool? It has a simple API that I'm excited to use. Writing it felt similar to React, with some pain points that come from Yew being a young framework and Rust's verbose type system, but I'm quite enjoying the development process in this tech stack.

Writing the hook took me a few iterations to get the API right, since I'd never written much Rust code dealing with generics and trait bounds. In fact, as of the time of this writing you can see at least one older version still in the codebase because I haven't migrated everything over to the new and improved one yet.

Initially I had my own Response and Query types with weird lifetimes that were annoying to write and use because I didn't understand that I could dig into the ResponseData type on the generic Q trait with the GraphQLQuery bound. Going through this exercise forced me to better understand lifetimes, Clone, and generics, so I'm happy I spent the time iterating on it.

Potential Improvements

loading Field

Some GraphQL hook libraries provide a loading field on the data structure so you can tell if you're still waiting on the API. I'm conflicted on adding this, since you can discover if the API has returned by checking whether data or error is a Some value.

But it's not hard to add, and it simplifies if statements for users of the hook, so I'll probably add it once I start using the hook more heavily and feel that annoyance myself.

Improved Errors

Right now I'm just smashing the errors into a string. Ideally I'd return them in a structured manner, but I just haven't gotten to that yet.

Refreshing the Query

Given that the use_effect_with_deps has a () as its dependency, this query will only run on the first time the component using it renders.

Ideally I would have better control over when the query refreshes, especially in scenarios where you add something new and want the UI to update. It might be easier to just pair it with another hook that lets you refresh the whole component, or maybe it's a new parameter to the query.

Time will tell. I'm not nearly close enough to caring about that kind of thing in the Dicebag app yet!

Support For Any GraphQL Client

Right now it only works with the structs produced by the graphql-client crate. That's what I use in my project, but if I were to export this hook for general use it would be nice to switch up the types as needed. I'm not even sure I can make the hook that generic, but it would be a useful learning opportunity to stretch the bounds of generics until they break.

Conclusion

Yew's hooks are fun! Writing my own taught me a lot more about Yew as a framework, generics, trait bounds, lifetimes, Rcs, and more.

Yew is still developing as a framework, but I'm excited to see where it goes. It already rivals React and other top JS frameworks in terms of speed, and that's with a small volunteer community working on it. WASM has a bright future, and because of that, Yew has an opportunity to play a big part in the Rust web development space. I enjoy working with it so much that I'm hoping to contribute to the project myself. And if I'm lucky, maybe I'll even get paid to write Rust on the front-end someday!

If you have any feedback regarding the hook or this post, feel free to open an issue on my repository or reach out to me on the social media platforms on my About Me page!

2021 Year In Review

As the first year of the decade comes to a close, I can breathe a sigh of relief. While 2021 wasn't great, at least it wasn't 2020.

Personally, I had a pretty solid year. As a country and global society, things could've gone much better.

Let's get the global bad out of the way first:

  • The January 6th Insurrection, which will be discussed as one of the lower points in US history for decades
  • Carbon emissions went back up after a slight lull from the pandemic
  • The pandemic remained a pandemic, even after an absurdly effective vaccine was quickly created
  • Seattle had an election that somehow resulted in us electing a Republican who joined the GOP after Trump took it over
  • Dramatic, unseasonable, and deadly weather events happened in every place I've ever lived, including the hottest day ever recorded in Seattle and the coldest day in the last 30 years
  • Breath of the Wild 2 was not released
  • Many other things that kept me awake at night that I've apparently successfully forgotten

But after a year like 2020, I've learned to manage my existential horror so while the world may be literally burning all around me, at least I can enjoy my life and hang on to the sliver of hope that we can turn this train around before we go completely off the rails.

And with that nasty list out of the way, it's time to be positive and do some navel gazing.

Every year since 2015 I've put together a "52 Things" list. I stopped publishing them publicly after a couple years, but I've continued the practice of setting 52 goals across some fairly consistent categories:

  • Personal
  • Health
  • Finances
  • Social
  • Experiences
  • Media
  • Work

In 2021 I finished all of my social goals, all but one of my media and financial goals, and a smattering of others in the remaining categories. Altogether, I finished 24 of the 52! That's not too bad, and is in line with most other years (with the exception of 2020, because COVID blew up my ability to do a lot of things I had planned).

Some highlights from my goals this year are:

  • Finishing 77 books
  • Climbing two V4s at the bouldering gym
  • Making a new friend, even during a pandemic
  • Hitting all of my financial savings goals
  • Getting vaccinated

Outside of goals my year also went really well.

Work was a big part of why 2021 was a good year for me. I really enjoyed settling into my role as a software engineer building websites to display scientific data. I've never been this happy with work before, and I wish I would've gotten out of consulting way before 2020. I'm now getting paid my highest salary ever! It's weird that I got a pay increase for working in the non-profit industry... It's almost like my skill set wasn't valued and I wasn't compensated fairly as a consultant (but that's a post for another day). Working in the non-profit industry has been rewarding, and I have a great team of people I work with every day to build quality, useful software. Plus we have a great work/life balance culture of working hard but calling it quits at the end of the day. While I still don't get to program professionally with the Rust programming language, I do enjoy what I do and that makes life much better.

Outside of work I tried to keep myself as busy as a pandemic would allow. I'm the Dungeon Master for a D&D group with five of my friends, and we had a good six or seven sessions throughout the year. Building a world and seeing others explore it is satisfying, and I like to think that I'm getting better at running games so they stay fun for everyone.

In addition to sitting around a table telling stories, I climbed regularly with a group of friends and while my waistline is not in great shape, I'm climbing at the highest level of my entire life! My goal is to move up another difficulty level in 2022, which will require slimming down a bit so I'm excited for that.

I also reconnected with my ex-wife after three years of space and we got back together, to the immense delight of our dog Kaladin. While there have been some bumps in the road as we get to know each other again, it's going really well so far and being with them makes me happy!

Finally, I got to spend a lot of time with family. Between a two-week vacation in the summer and a month at my parents' house this winter, I've gotten a lot of face time with immediate and extended family. Living in Seattle means I don't see them as often, since most of my family is in Utah. I'm grateful my profession allows me the flexibility to work from just about anywhere, and long trips to see family will likely be something I do every year from now on!

So yeah, 2021 wasn't too bad. I've got a new batch of goals that I'm excited to work on, and I'm optimistic that I can get more than half finished this year!

Hope you all are as excited for the coming year as I am.

We've got this.

Government as a Service (GaaS): How the Federal Government Could Streamline State Management

Last week, the Missouri governor showed the world his technological illiteracy by vowing to prosecute a "hacker" who brought a major data leak to the government's attention. The entire tech community had a big laugh, since the government itself was sending Social Security Numbers to users' browsers, where they could be easily found with the barest modicum of tech know-how.

The governor's public blunder never should have happened. The fact that he publicly stated his ignorance in such an embarrassing manner demonstrates that nobody in his advisory circle knew enough about technology to tell him to stop. Nobody he knew understood that it was the government's mistake, even though the data breach was responsibly reported.

It's not a big leap to assume that nobody competent is leading Missouri's technology departments. I shudder to think what else in the state is wide-open for attackers.

Sure, it's easy to call out government incompetency (especially when it comes to technology). It's practically an American pastime. But things like this keep happening, and we should keep getting upset until the issue is solved.

Securing IT systems is no trivial task, and we make it incredibly difficult on ourselves due to the very structure of the US federal government system. States have an incredible amount of power, which means the United States has about 50 different ways of doing any one thing when it comes to running the state IT systems. That's a huge attack surface for malicious actors to find their way into.

But regardless of which state we're talking about they all need to do similar things that involve information technology. Here are just a few things I could think of off the top of my head:

  • Legislation
  • City planning
  • Taxes
  • Voting
  • DMV
  • Communications
  • Infrastructure maintenance
  • Citizen feedback
  • COVID reporting and notifying

I could keep going.

So why are we creating 50 different IT systems for these? As a small example, I live in Washington, which has a great legislation system that even allows citizens to provide feedback on bills. Looking at the same type of site from Texas, the last state I lived in, their legislation system leaves much to be desired, especially because there's no way to provide feedback on the very bills you're searching.

I'm sure both states' IT departments (or potentially hired contractors) put a lot of hours into these systems. It's great they're available, but sad that my friends in Texas don't have the same tools of democracy I have. And looking back at the utter incompetency of Missouri, many of these systems across the US were likely built on a shoestring budget by people who don't have an understanding of IT security.

All this leads me to ask: why aren't states working together to provide a great, secure technology experience for their citizens?

I argue that our federalist system discourages coordination, at least when it comes to IT systems.

One benefit to the federal system is that states get to be "laboratories of democracy". Each state can adapt its laws to its citizens, with the federal government theoretically providing a common floor of basic human rights that every state has to provide. Sometimes those "experiments" do leak over to other states, until things that used to be unthinkable (gay marriage or cannabis legalization) are essentially the law of the land, even without federal support. That can be a pretty great way to run a country, but it does have its pitfalls, one of which is the fragmentation of technology solutions, further exacerbating our already inefficient bureaucracy.

Maybe I'm just ignorant, but I haven't seen collaborative thinking when it comes to building and running the information technology powering our state, county, and local governments. Part of it is likely because the Internet and supporting technologies are relatively new and the machinery of government moves deliberately slow. Another part is that private industry sucks up the best IT talent just to put them to work on milking a few more dollars out of ad clicks instead of positively contributing to society. And yet another is because the one government body in place to facilitate coordination between states simply hasn't done it yet!

Now is the perfect time for the US Digital Service to create a Government as a Service (GaaS) platform.

The federal government should lead the charge in researching and developing a suite of open source state management tools that are free to use and expand upon. This would create a cooperative IT community where states can add to these systems based on their unique circumstances and make those improvements available to others. It would also greatly reduce the attack surface available to potential hackers, since a handful of common systems can be more efficiently hardened than all of the unique systems built in each state. Hell, even private businesses would be free to use or contribute to any of the tools that overlap with their needs.

This wouldn't even require changing laws, as far as I'm aware (though I'm no lawyer). The US Digital Service could be instructed to coordinate or build these tools through an executive order. New laws enabling this kind of digital transformation would further accelerate the quality of these shared tools, especially when it comes to allocating funds towards making the systems private and secure. And with the USDS leading the way, these systems would be fantastic. The USDS is already working on a common set of tools to standardize federal websites to create a unified user experience. They would be in the perfect position to help states take advantage of the tools already built and create even more quality tech to support state governance.

Obviously, some people will have concerns with such coordination. I imagine some folks are happy that there's no federal coordination of IT strategy in order to protect against some sort of centralized government technology takeover. But to mitigate those fears, these tools would be open source and voluntary to use. In addition, information privacy should be a major concern when creating all of these new systems. The latest encryption methods should be employed with no backdoors, and independent audits should be performed to keep everyone using these systems safe from bad actors, both internal and external.

Imagine the time and money saved if all US states coordinated in building an open source suite of government management tools.

Your next trip to the DMV could take minutes, no matter what state you live in. You could easily find and explore an interactive breakdown of your city's finances. You'd no longer have to pay some company to file your state and federal taxes. Your city's administrative budget could be slashed, all while you get a more responsive government.

And best of all, you could finally sleep soundly at night knowing your fellow citizens in other states are getting just as excellent an experience interacting with their government online as you are.

Podcasting's Walled Garden Problem

If you know me well, you know I'm a tad bit into podcasts. I listen to 28 different shows regularly, with 40 other shows I pick and choose from when I have the time. If I'm not listening to an audiobook, chances are I'm devouring a podcast.

I've been in love with podcasts since I discovered them over a decade ago. It's basically internet radio, except you're the DJ. Distributed through the ubiquitous RSS feed technology, they're easy to find, share, and consume.

But Spotify (and some other media organizations) are intent on changing that.

When Spotify acquired Gimlet in 2019, I felt a change in the wind. Spotify said it would keep existing podcasts available outside its platform, but I knew it was just a matter of time before that promise was broken.

And here we are now, in 2021. Two of my favorite shows, How to Save a Planet and Science Vs, have both become Spotify exclusives.

The hosts made many announcements leading up to their shows' moves to Spotify, making it clear that you could still listen for "free", as long as you did it on Spotify.

Now two excellent science journalism podcasts are locked away behind a Spotify account, unavailable to those of us who refuse to juggle two different podcast apps or don't want to move all our podcasts over to Spotify. I'm particularly disappointed about How to Save a Planet, since it was the one show that helped partially reduce my climate anxiety. It covered all the great work being done to alleviate the worst aspects of climate change, and it was a legitimate bright spot in my week to hear about new technologies that might save the world.

All of this wouldn't be a particularly annoying problem if Spotify's app actually worked well for podcasts. There's no way to add custom feeds, which is a must-have for people like me who support their favorite podcasters on Patreon and get private RSS links that provide access to bonus content. To listen on Spotify, I'd have to maintain podcast lists in two different apps for no good reason.

And once you try to listen to a podcast on Spotify, you quickly realize it's a horrific experience. Podcasting is an afterthought for the developers of Spotify. They only recently added speed controls after years of having podcasts available, and managing the podcasts you follow and which episodes you want to listen to is an unintuitive experience.

Nobody would pick Spotify as their podcast app of choice, so Spotify has decided to acquire great shows and force fans to use its application in an attempt to fully capture those shows' revenue streams.

Once you throw money into the equation, this all makes perfect sense. If a podcast is only available on Spotify (even if it's free), Spotify receives all the ad revenue for the show, since it can use the ad placement technology it developed on the music side of the business. It wants to control all aspects of the show in order to maximize its profit. And you have to have a Spotify account to listen, which makes it that much easier to turn a listener into a paying Spotify user.

Someone at Spotify must have run the numbers and shown that putting its shows in its walled garden and losing listeners is still more profitable than keeping them widely available. It's a downright shame, since many of the Gimlet shows it acquired are incredibly informative and contain information that will make this world a better place.

I fully expect this trend to continue, and probably accelerate. That's why I'm a huge proponent of paying for your favorite shows through sites like Patreon. Directly supporting artists with small monthly contributions reduces their dependence on ads and helps keep them independent.

If you have a favorite show, please consider regularly supporting them using whichever method they prefer. The consolidation of podcast networks and ownership will continue to create these walled gardens, leading to wonderful content being hidden from millions of listeners.

It's up to passionate listeners to support these artists enough that they don't have to sell their souls to the giant corporations just looking to milk them for ad revenue. Please do your part and keep the information flowing freely, just as it was intended to do.

Living with Seattle's Long Dark

It's that time of year again, when the sun sets before 7 PM and a perpetually gray blanket of clouds once again descends on the Emerald City.

The Long Dark in Seattle has begun.

As an introvert, fall and winter are two of my favorite seasons here in Seattle. The city slows down, social events become less frequent but more cozy, and I get to snuggle up in a blanket and read while listening to the rain drumming on the porch.

But as someone with depression, fall and winter can be the most difficult seasons of the year. At its worst, the Long Dark gives us a paltry 8.5 hours of sunlight. Add in the November reversion to standard time and my night owl sleeping habits... I'm lucky to see 6 of those hours some days.

Thankfully, after living in Seattle for five years, I've figured out a variety of ways to cope with it. Here are a handful of things that have worked well:

  • Long lunchtime walks with my dog to soak in the sun
  • Bright lights indoors until 8 or 9 PM
  • Hot coffee to warm me up and wake me up
  • Regularly hitting the climbing gym
  • Hot pho and other soups
  • Movie nights with friends
  • Snowboarding adventures
  • Weekend hikes and camping trips
  • Hobbies like video games and programming that blast my eyes with light
  • Vacations down south, often in Utah to visit family
  • Accepting that my brain will be a little down on itself for a while

It took me a while to build up a good toolkit to fight back against the literal and figurative darkness. But now that it's in place, I've been weathering the worst of the winters well.

While the Long Dark may sometimes be tough, the rest of the year more than makes up for it.

I love this city!

The Why and How of Rust Declarative Macros

In order to prepare to conduct a technical interview of a potential future co-worker, I decided to try to solve the problem we would be presenting to the candidate. I chose to do it in Rust (even though we don't use Rust on my team) so that I could approach the problem with a fresh perspective and potentially learn some new things about my favorite programming language.

It turns out revisiting an old problem using a dramatically different programming language will teach you a lot! I wrote four different solutions using different approaches and patterns, which helped me better understand Rust's standard library and how to write more "Rusty" code. It also prepared me to anticipate what the interviewee might do in the interview, so I can ask good questions to see how they think.

But the biggest thing I learned through this exercise was how to write Rust declarative macros, which this post is all about.

Why Use Declarative Macros

I've never worked with a language that uses macros before, and reading about them has always scared me a little. Code that writes other code, but with special syntax? Yikes. Since meta-programming can become extremely complicated, I'd never reached for it to solve a problem, but I stumbled across a good opportunity to dive in while writing tests for my interview answers!

While testing my potential solutions, I found myself repeating the same exact lines of code over and over, with minor variations:

// Tweak the test array to check the different conditions in each test
let test_array = vec![3, 3, 4, 2, 4, 2, 4, 4];

let first_result = find_answer_1(&test_array);
let second_result = find_answer_2(&test_array);
let third_result = find_answer_3(&test_array);
let fourth_result = find_answer_4(&test_array);
// Add another line here in every test when a new function is made

assert_eq!(4, *first_result.unwrap());
assert_eq!(4, *second_result.unwrap());
assert_eq!(4, *third_result.unwrap());
assert_eq!(4, *fourth_result.unwrap());
// Add another line here in every test when a new function is made

In addition, whenever I added another solution to the problem I had to update multiple lines in every test case. It was becoming a real headache, and it only got worse with every new problem solution function I wrote.

Thankfully, declarative macros are the perfect tool for writing repetitive code with minor variations!

Now instead of writing all those lines for each test, I only needed to do the following to test each case:

test_find_answer_functions!(
    // The answer based on the list below
    4,
    // The test's input data
    &[3, 3, 4, 2, 4, 4, 2, 4, 4],
    // The names of the functions I want to test
    find_answer_1,
    find_answer_2,
    find_answer_3,
    find_answer_4
    // Add another function name here whenever it's created
);

Isn't that dramatically better? It's extendable too, so when I get an itch to write another solution to the problem in the future, I can quickly tack it onto the end of the macro's arguments and it will also get tested.

How to Write Declarative Macros

So now that we've seen how a declarative macro can simplify writing code, let's dig into how to write one. The following code block is the final macro I came up with, along with a copious number of comments describing the syntax, since macros use some symbols you may not be familiar with from writing regular Rust:

// macro_rules! is the macro used to create declarative macros;
//   test_find_answer_functions is the name of this macro
macro_rules! test_find_answer_functions {
    // Match macro usage where None is the expected output
    //   (this arm matches the literal token None; it's not macro syntax)
    // - $test_array:expr - array of values to search for the answer
    //   (expr means any expression, e.g. vec![1, 2, 3])
    // - $function:ident - name of a function to test against
    //   (ident means an identifier, i.e. the function's name)
    // - $(...),+ - the inner pattern repeats 1 or more times, comma-separated
    (None, $test_array:expr, $($function:ident),+) => {
        // Repeat once per function name provided
        $(
            // Call the function with the test data and assert the result is None
            assert_eq!(None, $function($test_array));
        )+
    };
    // Match macro usage where generic type T is the expected output
    // - $answer:expr - the value of T we expect to be the answer
    // - $test_array:expr - same array as the None arm
    // - $function:ident - same list of functions as the None arm
    ($answer:expr, $test_array:expr, $($function:ident),+) => {
        // Repeat once per function name provided
        $(
            // Call the function with the test data and assert the result is the answer
            assert_eq!($answer, *$function($test_array).unwrap());
        )+
    };
}

Like I said before, macros are a bit weird. They've got a whole "who programs the programs" vibe that requires you to think about your code's structure differently, so I definitely ran into some issues when making the macro that wrote my tests for me.

If you ever want to try writing your own Rust declarative macros, you'll find a few of the roadblocks I faced written out below so you can avoid them yourself:

Issue 1

The first issue I ran into is that I didn't have a clear idea of what an :expr or an :ident was, so I was getting some weird errors. After reading through the metavariables section of the Rust Reference (which is a deep dive into the inner workings of Rust), I found my problem. I was treating my function name as an expr instead of an ident. Turns out expr is any valid Rust expression, like the value I wanted to test and the list of values to test against, and ident is any identifier, like the names I gave my functions. Little facepalm moment there, but solved easily enough.
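
To make the distinction concrete, here's a tiny hypothetical macro (not from my interview code, purely for illustration) showing where each fragment type belongs:

// call_with is a hypothetical macro, purely for illustration
macro_rules! call_with {
    // $func must be an ident because we use it as a function name;
    // $arg can be any expression
    ($func:ident, $arg:expr) => {
        $func($arg)
    };
}

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    // double matches :ident, while 20 + 1 matches :expr
    assert_eq!(42, call_with!(double, 20 + 1));
}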

Issue 2

The second issue was dealing with the two different patterns of test code. Some of my tests expected a result to be found, while some expected no solution to the test data provided. This led to two different assertions:

// For data with an answer
let test_array = vec![3, 3, 4, 2, 4, 2, 4, 4];
let first_result = find_answer_1(&test_array);
assert_eq!(4, *first_result.unwrap());

// For data without an answer
let test_array = vec![3, 3, 4, 2, 4, 2];
let first_result = find_answer_1(&test_array);
assert_eq!(None, first_result);

That pesky dereference (*) and .unwrap() in the example with an answer are totally different from the second example, where we only have to check whether the first_result Option is None!

Thankfully, Rust declarative macros support their own powerful flavor of pattern matching over the tokens you pass in. In the macro code above, you'll see two different arms: one for None and the other for a Some result.

And that order is important! When writing macro arms, you want to put the most specific pattern at the top so it's matched before its more general cousins. Since None is itself a valid expr, putting the general pattern first would mean the literal None got captured by $answer:expr instead of ever reaching the None arm. I spent a few minutes stuck there until I remembered that particular rule of pattern matching.
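
Here's a stripped-down hypothetical macro that demonstrates the ordering rule:

macro_rules! describe {
    // The specific literal arm has to come first...
    (None) => { "no answer expected" };
    // ...because None is also a valid expression; if this general
    // arm came first, describe!(None) would match it instead
    ($answer:expr) => { "some answer expected" };
}

fn main() {
    assert_eq!("no answer expected", describe!(None));
    assert_eq!("some answer expected", describe!(4));
}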

Issue 3

Finally, I fought the borrow checker for a bit, since I couldn't easily tell what the final output of my macro would be. Rather than randomly throw * or & into my macro, I decided to figure out how to view the expanded code with the following command:

rustc --pretty expanded -Z unstable-options src/lib.rs --test

Now that's a bit more complicated than I would prefer for a debugging command, but it makes sense once it's broken down:

  • rustc is the Rust compiler
  • The --pretty expanded flag prints the source code after all macros have been expanded; it's an unstable flag, so -Z unstable-options is required to use it
  • -Z unstable-options requires the nightly compiler (which can be turned on for a single workspace using rustup override set nightly)
  • src/lib.rs is the name of the file to compile, which is the one I'm writing my code in
  • --test means to compile the test code, which I needed since my macros are only used in the tests

Unfortunately, that final command expands all macros, including the final code for things like assert_eq! and the #[test] attributes on the tests themselves, so it took me a little bit of digging to find my specific macro code. But once I found my macro's output, I could clearly see the borrow checker problem and fix it!
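
Worth noting: the community-maintained cargo-expand tool wraps this same workflow in a friendlier command (check its docs for flags that narrow the output to just your test code):

# Install the third-party cargo-expand tool once
cargo install cargo-expand
# Pretty-print the current crate with all macros expanded
cargo expand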

Why Not Use Plain Rust?

I could've written a solution using plain ol' Rust, avoiding macros entirely. The main reason I didn't is that I simply forgot it was an option; I finished the macro before remembering that you can pass functions to other functions (which I absolutely love to do!).

So I went back and wrote up a solution using functions, though in the end I still feel like macros are a better fit ergonomically. I had to write two different functions: one for the Some case (when a solution is found) and another for the None case (when there is no solution):

// The trait bounds below need these standard library imports
use std::fmt::Debug;
use std::hash::Hash;

fn test_find_answer_functions_some<T, F>(answer: T, data: &[T], funcs: Vec<F>)
where
    T: Eq + Hash + Debug,
    F: FnOnce(&[T]) -> Option<&T>,
{
    for func in funcs {
        assert_eq!(answer, *func(data).unwrap())
    }
}

fn test_find_answer_functions_none<T, F>(answer: Option<&T>, data: &[T], funcs: Vec<F>)
where
    T: Eq + Hash + Debug,
    F: FnOnce(&[T]) -> Option<&T>,
{
    for func in funcs {
        assert_eq!(answer, func(data));
    }
}

I could've written a single function that handles both scenarios by passing in an Option as the answer parameter for the Some case, but that led to me writing this absolutely hideous answer argument, as seen below:

test_find_answer_functions(
    Some(&4),
    &[3, 3, 4, 2, 4, 4, 2, 4, 4],
    [
        find_majority_element_two_loop,
        find_majority_element_two_iter,
        find_majority_element_one_iter,
        find_majority_element_counting,
    ]
    .to_vec(),
);

Wrapping the answer in a Some instead of passing a naked 4 like I could with the macro was just too much for my perfectionist brain to handle.

Matching the prettier, more user-friendly macro syntax required two functions. Even then, I had an ugly .to_vec() that I couldn't get rid of (although I'm sure there's a way to do so if I spent a little more time on it):

test_find_answer_functions_some(
    4,
    &[3, 3, 4, 2, 4, 4, 2, 4, 4],
    [
        find_majority_element_two_loop,
        find_majority_element_two_iter,
        find_majority_element_one_iter,
        find_majority_element_counting,
    ]
    .to_vec(),
);

In addition to the less-than-ideal user interface, the function approach is a bit heavier at runtime. A macro expands into plain code at compile time, and that resulting code can be optimized by the compiler. The function-based approach iterates over a vector of functions at runtime, so there's a bit of a performance cost. This toy example isn't concerned with performance, but it's something to consider when choosing between the two approaches.
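
For intuition, here's roughly what the macro invocation from earlier expands into, hand-written rather than pulled from actual compiler output:

// An approximation of the expanded test_find_answer_functions! call
assert_eq!(4, *find_answer_1(&[3, 3, 4, 2, 4, 4, 2, 4, 4]).unwrap());
assert_eq!(4, *find_answer_2(&[3, 3, 4, 2, 4, 4, 2, 4, 4]).unwrap());
assert_eq!(4, *find_answer_3(&[3, 3, 4, 2, 4, 4, 2, 4, 4]).unwrap());
assert_eq!(4, *find_answer_4(&[3, 3, 4, 2, 4, 4, 2, 4, 4]).unwrap());
// Four direct, statically dispatched calls: no Vec, no function values passed around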

Wrapping Up

By this point you should have a high-level understanding of what Rust declarative macros are, where they might be helpful, and some potential issues you might face when writing your own.

While my example was a little contrived, it was a real-world usage of macros that made my life as a programmer a little bit easier and got me excited to look for more substantial opportunities to use macros in the future.

Further details regarding declarative and other macros can be found in the official Rust book, which is one of the best resources out there for learning the language and should be on the reading list of anyone wanting to become proficient in Rust. If you found this article interesting, I think you'd really enjoy the book. It's one of the best examples of approachable technical writing I've ever come across.


If you have questions or comments, feel free to reach out to me through any of the methods on my About Me page, or leave a comment in my guestbook!

COVID Summer

I thought this was supposed to be over. The vaccine would show up, everybody would take it, and life would get back to that "new normal" everyone was talking about.

But instead, we're seeing a fourth spike thanks to a brutal combo of the delta variant, vaccine hesitancy, anti-vax propaganda, and a general unwillingness to make personal decisions while keeping the general public's health in mind.

Thankfully this hasn't been the worst summer. Last year easily takes the title for worst year ever. I've been able to see friends, regularly go to the climbing gym, and not constantly think about COVID. That was a sorely needed reprieve from the pandemic lock-down restrictions, but now we're sliding back into lock-down mode because this pandemic just isn't over yet.

I don't want to place too much blame on anti-vaxxers. While that group contains a lot of misguided people, the real blame belongs to the select few fraudsters at the root of it all looking to make a buck. Those leaders and their credulous followers are responsible for rising rates of all sorts of preventable infectious diseases, but I think we'd probably still be in the same spot if they didn't exist. The delta variant is brutal. It didn't originate in the US, so even a fully vaccinated population here wouldn't have stopped it from mutating. Sure, we wouldn't be facing shortages of ICU and hospital beds had more people gotten the vaccine, but we would likely be facing restrictions similar to the ones being put back in place right now to prevent community spread.

Despite all my complaining, I'm pretty lucky. I'm in Washington, where the populace generally values scientific evidence and we don't have a governor who is actively working to kill people (like the ones in Florida or Texas). The restrictions here amount to "wear a mask in public" and "please get the vaccine if you haven't yet". Seventy percent of my county is vaccinated, and while we have substantial community transmission right now, it's nowhere near other parts of the US or the world in general.

But I'm scared for the winter. If we get another spike like last year it's going to be the worst one yet. Delta is nasty, and unless something changes we will very likely have to go back to general lock-downs in order to save lives.

Lock-downs suck, as necessary as they may be. My mental health still hasn't rebounded from the trauma inflicted by the isolation and uncertainty of 2020.

Here's hoping the vaccine holds up over the long term so that my vaccinated friends and family and I can try to live some semblance of a normal life, but I'm mentally preparing for the worst.

The Future of the Web: Why It Doesn't Have to Be JavaScript

I am a professional web developer. I use JavaScript on a daily basis, but to be honest I harbor a bit of hate for the language. Don't get me wrong, it does its job and does it well enough, but... there's a reason TypeScript exists.

Despite its glaring flaws, JavaScript is currently the most widely used programming language in the world. JavaScript's stratospheric rise has largely been driven by the growth of the Internet and web technologies. And while JavaScript exists on the server, it was born for the web. For decades it's been the primary way to write websites, and that won't be dramatically changing anytime soon.

However, the future is on the horizon. WebAssembly (WASM) is a new type of bytecode designed to run in web browsers. While WASM is still relatively rare in the wider programming world, it has been supported in modern browsers for years.

Do you know what this means?

We're free.

Free from being forced to use JavaScript, a language famously thrown together in 10 days, with some of the most confounding behaviors I've ever encountered in my years of programming.

So what do we do with all that freedom?

Work with a better language!

WASM is likely supported by your favorite language, and frameworks and tools for building web apps with it are being created and refined every single day. So the next time you need to build a website, give your technology selection a second thought.

It doesn't have to be JavaScript.

WASM + Rust

My favorite WASM-supported language is Rust (which you already know if you've ever had a conversation about programming with me). During the pandemic while I had nothing better to do with my free time, I read The Rust Book and fell in love with its thoughtful design and developer experience. I enjoy it so much that it's my goal to someday work with Rust professionally.

However, the web development ecosystem still needs a little more growth, so it will be a bit longer before I get paid to write a web app in Rust. Other languages face the same barrier, but exciting projects like Yew (Rust) and Blazor (C#) are getting better each day.

Dicebag

Recently I decided to put WASM to the test with a serious effort to build a website completely in Rust, doing my best to select tooling and frameworks that replicate what I do with JavaScript/TypeScript during my day job.

The result is Dicebag! I regularly play Dungeons & Dragons and haven't been happy with the online tools my group and I have used, so I'm building tools that will help us have a better experience. As of this writing, it's an ugly, non-interactive character sheet, but it gets a tad bit better every time I work on it. If you're curious about the code, hop on over to the repository on GitHub. Contributions are more than welcome!

Despite the site not being very fancy, I'm very happy with the development tooling so far. Here's a short list of each framework or tool I'm using, with its equivalent in JavaScript land (where applicable):

  • Trunk replaces Webpack
  • Yew replaces React
  • Rust-specific GitHub CI/CD actions

So far I've really enjoyed the experience with the tooling. None of them have reached version 1.0 at this point, but things are functional and you can produce a complete app with them. I'm sure I'll run into more issues as the site becomes more complex, but the basics are there!
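
To give you a taste, here's what a minimal Yew component looks like (a sketch written against a recent Yew release; the exact API has shifted between pre-1.0 versions):

use yew::prelude::*;

// A minimal function component; the html! macro gives JSX-like templating
#[function_component(App)]
fn app() -> Html {
    html! {
        <h1>{ "Hello from Rust and WASM!" }</h1>
    }
}

fn main() {
    // Mount the component into the page
    yew::Renderer::<App>::new().render();
}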

My goal with Dicebag is to provide tools like character sheets, equipment and spell management, custom views that facilitate gameplay by presenting contextually relevant choices, a DM encounter builder, a dice roller, and more. It should be perfectly usable whether you're the only one in your group using it or everybody is.

In addition, making this site a success will prove out the technology and give me a story to tell the next time I try to convince my co-workers to choose Rust for their next project. Plus, finding the pain points lets me contribute to the ecosystem by opening issues on GitHub or even contributing code to make the tools better.

We'll see where this project goes, but I'm excited!

Someday I'll never have to write JavaScript again.

Vaccinated!

I got my second poke yesterday!

A few hours after getting my second dose of the COVID-19 vaccine, I was very tired and took a five-hour nap, waking up just in time to go to bed. Unfortunately, I woke up in the middle of the night soaked in sweat, with a pounding headache and a variety of bad dreams marching through my brain.

It was an awful night, but I eventually got back to sleep and woke up at 10 A.M. feeling great. The rest of the day was filled with dog park adventures, cooking delicious food, reading books, and playing video games.

Totally worth it.

Trading a night of weird dreams and restless sleep for protection against COVID-19 is an excellent trade.

If you haven't gotten your vaccine yet, please do! It will protect you and others from an awful and potentially fatal sickness. The people in my life who caught COVID-19 spent weeks dealing with its effects.

Vaccines are safe and effective. I consider it a part of my civic duty and am proud to have done my part.

Let's play board games at my place soon, y'all!