Lane Sawyer🌹

Just trying to leave the world a little better than I found it.

VOTE

Midterms are coming up. GO VOTE!

While I obviously have my preferences, I won't tell you how to vote, just to go do it.

There's one qualification on that: do not vote for election deniers. Our elections are safe and secure. It's a federated system; stealing an election would take a multi-state conspiracy with thousands upon thousands of people involved. It's not happening. So don't vote for the idiots saying it is. That's how you get fascism.

You've only got until Tuesday, November 8th, so go get on it if you haven't already!

One awesome benefit of voting is that you are allowed to complain about how things are going in the country! If you're eligible to vote but don't, and then get mad about political outcomes, you have no right to say a thing, since you didn't do the one thing that can make a difference.

So be like me!

Go do your civic duty and vote, drown your sorrows on election night, and bitch about the continual demise of the US on the Internet until the next election.

VOTE.

The Impact of Smart Tech, Faceless Corporations, and Labor Exploitation

Today my apartment complex locked me into my apartment.

Upon that realization, I had a panic attack.

It ruined my day to the point where I'm still on edge almost twelve hours later. Part of addressing the lingering symptoms of my panic attack is writing this blog post, where I'm sharing my experience to relieve stress. As such, it will likely be a bit more rambling and jumpy compared to my other posts. Panic attacks jumble your thinking, so please forgive the drop in quality here.

As a bonus, I'm going to use this experience to point out the very real consequences of smart tech, faceless corporations, and the exploitation of labor.

First off, before anyone worries too much, I'm doing fine. I contacted my apartment's front office and the maintenance team got my door functioning again pretty quickly. I probably could've gotten myself out with the screwdriver I have in my closet without too much effort.

But there's no use reasoning with my body, which was still feeling the effects of being trapped in a concrete box against my will. No amount of rational thinking can stop the chemicals released during a panic attack.

Before I jump into pontificating about the dangers of smart tech and the corporatization of everything, here's what happened:

My apartment complex is installing "smart locks" that have a keypad and mobile app for managing the locking system. The company behind the smart locks sent their install team to my apartment today, but the hub that's supposed to control the lock had issues, so they said they'd be back at a later date once they had working hardware. Unfortunately, they had already installed the smart lock, so they had to re-install the old lock system on my door. Then the install team left, saying they'd contact me for another appointment in the future. No biggie, I thought.

Thirty minutes later I went to take my dog on his lunchtime walk, but when I turned the handle to exit, the deadbolt slid into place. The smart lock's install team put my lock back on backwards, so that the normally handy "unlock if you turn the handle" feature kept me locked inside. On top of that, the handle fell off into my hand!

I have never been locked away anywhere (at least that I remember, who knows what trauma lies in my childhood due to my garbage memory lol), so this was the first time I truly felt trapped and confined. My first instinct was to pound on the door and scream, in hopes that the installers were still around thirty minutes later (as I said, critical thinking skills aren't really available when you're in the grip of a panic attack).

I'd never felt such a visceral need to escape before, which triggered my panic attack.

It's been a while since my last panic attack, but I'm no stranger to them. During a turbulent period of my life when I was going through a divorce, finally got my anxiety and depression diagnosed, and COVID hit all within the course of about two years, I spent many hours crying in the shower or cowering in bed trying to overcome the unrelenting grip of a panic attack.

Thankfully I've developed ways to reduce their impact, but they're still no fun when they pop up. Even though I reined in my thoughts quickly, the effects of a panic attack stick around for hours as my body flushes out the chemicals pumped through my system during the initial attack.

And now that I'm feeling (mostly) better, I figured I would turn this experience into an opportunity to cover a few things about this crazy world we all live in that concern me:

  1. Smart tech and the rise of surveillance capitalism

  2. The uncaring, relentless pursuit of profit over human happiness from large corporations

  3. The exploitation of labor

Now how do these all connect?

The smart tech aspect is obvious.

This new locking system is run by some big company that has access to all the times I open and shut my door. Privacy laws are all but non-existent in the United States, and as a programmer I am acutely aware of just how much data you can glean from someone's smart devices. In addition, while researching what locks were going to be occupying my apartment, I discovered that these particular locks emit a unique tone for each key pressed on the keypad, allowing someone nearby to literally hear your access code.

Holy fuck is that a major security risk!

The lock is literally designed to make it easier for apartment complexes to manage keys, not for apartment dwellers to live convenient lives. There's nothing preventing the company making all the data available to the apartment complex either, giving them more insight into my comings and goings than I'd prefer.

Look, I'm not opposed to technology being used to improve our lives, but I don't trust the companies running these systems. My own smart home setup is all managed from a local server that I run, where my data isn't being harvested by some company to sell to ad brokers. It's possible to use this technology in a privacy-respecting, human-centered manner, but that's not what the vast majority of smart home companies do, because harvesting and selling data is a lucrative business.

As for large corporations doing anything to make a buck?

I went and talked to my apartment complex about the incident to let them know that their actions to change the locking system resulted in me literally being locked in my apartment. I never asked for this lock upgrade. I wasn't given a choice to opt out, only the "opportunity" to come ask questions to have my concerns "addressed". My dumb locks have worked perfectly fine for years and I have no need for a smart lock. And then, the incompetency of the company installing the lock ended up imprisoning me in my own apartment.

I asked for some small credit towards my rent or another way to make amends for literally locking me up against my will, but the receptionist said she had no ability to do anything of the sort and that management wouldn't consider the request even if I talked to them. The utter lack of humanity, driven by the profit needs of the giant company that owns the building, made me feel sub-human as my panic attack was completely dismissed.

If I were a landlord and my actions accidentally caused a tenant mental distress I wouldn't hesitate to do something to make up for it, especially knocking a few bucks off next month's rent. The fact that the only person present from the company managing where I live had no agency to make that decision shows a lack of trust in workers and a clear prioritization of making money over listening to the people whose very homes you control.

And the exploitation of labor?

The installers from the smart lock company were clearly not adequately trained and likely paid under a living wage. They couldn't install their own company's smart lock and then re-installed the old lock incorrectly, locking me in.

Simple steps that could be taught with a little bit of training (like not tearing off the old lock until you've confirmed the new locks are fully synced and working) would've prevented the extra work and noise (which interrupted a work meeting I was in, by the way). And the installers not fully testing the old lock after reinstalling it shows they weren't trained on what to do when an install goes sideways.

I'm not a professional smart lock installer, but how do you not notice that the way you re-installed the old lock will literally trap a resident inside their own house? There are maybe three different states the locking mechanism can be in, and they didn't bother to test them all? I don't necessarily blame the individuals performing the work, because of the system of labor we all deal with in the United States. Our lackluster labor protections lead to low wages, which lead to uncaring workers doing the minimum to get by. If you were paid a minimum wage, wouldn't you do the minimum as well? We should pay people living wages, treat laborers in all jobs with dignity, and provide adequate training. A more capable, caring craftsperson would have found their mistake before leaving a resident locked inside their own house.

And I'll leave it at that.

The effects of my panic attack linger, but it feels good to get words on a page. Each of the three areas deserves much more attention and analysis, but I hope my overview can at least express why I'm concerned about the direction our country and economy have been moving for the last few decades.

These decisions to prioritize profit over everything else cause very real harm. I'm lucky that my most traumatic experience bumping up against these systems was a panic attack due to a botched lock install. Others aren't so lucky.

We could build a society where all are able to live and work with dignity. But we don't, simply so a few billionaires can score more points in the game of chasing the almighty dollar.

So what do we do about it?

VOTE.

Support politicians that want to improve labor laws, do some trust-busting, raise tax rates on the ultra-wealthy and use that revenue to build public services we all benefit from, and get some human-centered data privacy legislation in place.

Until we build such a world, we'll continue to be forced into a system where our every move is monitored and monetized, where faceless corporations ignore human suffering, and where laborers are treated as disposable.

One Simple Thing: Switch to Firefox

The Internet is an incredible invention, likely to go down as one of the most consequential technologies in human history, right up there with agriculture, government, electricity, and industrial processes. For a large chunk of humanity, the web is already an integral part of our everyday lives. We pay our bills, chat with friends, apply for jobs, or even make a living from this incredible technology.

But the Internet would not be nearly as useful without another invention: web browsers.

Web browsers and their underlying technologies provided a general-purpose platform on which to build all the amazing web applications we rely on today. They're incredibly complex pieces of software that can still display websites coded back before the turn of the century. It takes giant teams of programmers to maintain existing code and add new features as the web continues to evolve.

Most of us don't really think about our web browser. We use whatever default comes with our operating system, on both mobile and desktop devices. And since Android (the most prolific consumer-facing operating system in the world) provides Google Chrome by default, it's no surprise that Chrome has held the majority of market share for almost a decade.

But there is another browser out there. One that's not controlled by one of the biggest companies on the Internet that's obsessed with vacuuming up every little interaction you have with your device.

Firefox.

Birthed from the ashes of the Netscape Navigator project, Firefox has been a major player in the web browser space for two decades! While the project has had its ups and downs, it's an incredibly capable web browser with some really great features.

Some of my favorites include:

  • Picture-in-picture mode so you can easily keep watching videos while using another application

  • Multi-account containers, which are useful for separating work and personal sites, isolating cookies, or logging onto multiple accounts on the same website

  • Account sync that carries my preferences to any device I use

  • Privacy by default, with excellent tracker-blocking protections enabled out of the box

Chrome definitely has some of these features, but Google does not care about your privacy, which is the major sticking point for me. I'm not going to turn this article into one about why you should care about data privacy (that will be its own post).

But you should care.

Google, Facebook, and any ad-based company are making billions of dollars off of observing your behavior so they can serve you hyper-specific advertisements. Most of us have been so worried about governments spying on us that we forgot private corporations can become just as invasive into our personal lives.

Google Chrome is one of those data-collecting tools. Its default settings allow Google (and anyone else running a website) to track you as you browse, thanks to permissive cookie settings. In addition, Google has no interest in providing good ad-blocking technology, since they themselves rely so heavily on ads to stay alive.

And that's the big change happening soon that encouraged me to write this post. Google Chrome is moving to Manifest v3, a new version of the technology that browser extensions are built on. This new version removes key pieces that ad-blockers rely on to be effective, meaning you're going to start seeing a ton more ads in Google Chrome, even if you're using a great ad-blocking extension like uBlock Origin.
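
For the technically curious, here's a rough sketch of the piece being taken away (simplified extension code, not uBlock Origin's actual implementation, and ads.example.com is a made-up domain). Under Manifest v2, an ad-blocker can inspect every request in its own code and cancel the bad ones on the fly; Manifest v3 removes that "blocking" hook in favor of a limited list of pre-declared rules.

// Manifest v2 style content blocking — a simplified sketch.
chrome.webRequest.onBeforeRequest.addListener(
    function (details) {
        // A real ad-blocker runs its full filter engine here on every request.
        if (details.url.includes("ads.example.com")) {
            return { cancel: true }; // Block the request before it ever leaves the browser.
        }
    },
    { urls: ["<all_urls>"] },
    ["blocking"] // This "blocking" mode is the key piece Manifest v3 removes.
);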

So if you don't want to start seeing more ads or care about privacy in general, there's really just one option:

Firefox.

Join me and millions of other people. It's a fantastic browser, and by using it you'll help make the web a better place by showing Google they can't just take all of your personal data.

The One Thing: Switch to Firefox

Microsoft Can't Unzip tar Files: My Azure Experience

Recently I was working on getting a basic Node.js REST API running on Microsoft Azure's App Service platform. I've only used AWS professionally, but I wanted to get a sense of what it would take to run a simple website off of Azure so I decided to give it a whirl.

The experience left a sour taste in my mouth and helped me understand why AWS is currently winning the Cloud Wars.

Let's start with the most jarring difference: documentation.

Azure documentation is sorely lacking in discoverability and usefulness. The documentation site is vast and likely contains a lot of good information, but it's organized in a way that forces developers to jump between entirely different sections of the website. And despite how detailed it gets in some places, in other places it's inexplicably missing explanations of configuration settings.

AWS documentation has its own issues, but I can generally trust that a basic walk-through will have everything I need to get started, and that each setting will be explained in the reference links each tutorial points to.

Second issue: Azure's GUI-centric approach.

My favorite part about AWS is that (almost) everything can be scripted with configuration files. Azure has the same capability, but I could not find a single tutorial that gave me enough information to write my own configuration file for the service I was using. Instead, I followed a tutorial that walked me through building what I wanted in the Azure console and then exported the configuration file for reproducibility in the future.

Not ideal.

With AWS, the config-first tutorials teach you what each part does and why it's needed, so you can do the whole thing without ever opening the AWS console.

On top of that, the exported config came with a magic token variable to connect GitHub with Azure, but I couldn't find a single reference explaining how to change it later, or how to create it in the first place without going through the Azure console to configure my service.

Third issue: Azure can't unzip tar files!

After following a tutorial for making the Node.js REST API (which didn't even work when initially deployed), I noticed the upload of the build artifact to the server was taking over 10 minutes because the build output had not been archived or compressed. Since the build server was a Linux machine, I decided to archive the output using the tar format instead of zip so I could iterate more quickly on the still-broken API.

After fixing what turned out to be some issues with environment variables, the API still didn't work. I pored over the code and couldn't find anything wrong. Everything worked on my machine and the templates matched what was in the tutorial, except for the file archive step I added to fix their slow upload speeds (which should've been my first clue). There wasn't anything obvious left to change!

It took a two-hour call with an Azure agent before they finally realized that the Azure service I was using could not open up tar files, so the API wasn't being fully deployed. I needed to create a zip archive instead. So I changed it, and the API started working!

I understand that zip is the archive tool baked into Windows, so naturally Microsoft will prefer using that to other tools. But it's not hard to write code that checks the archive file type and chooses the right tool when expanding the archive, so I was flabbergasted to hear that the archive format is what caused the problem.
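
For the curious, the check I expected isn't complicated. Here's a rough Node sketch (my own illustration, not Azure's actual deployment code) of peeking at an archive's magic bytes and picking the right extraction tool instead of assuming everything is a zip:

var childProcess = require('child_process');
var fs = require('fs');

// Sniff the archive's first bytes and hand it to the matching extraction tool.
function extractArchive(archivePath, destination) {
    var header = Buffer.alloc(4);
    var fd = fs.openSync(archivePath, 'r');
    fs.readSync(fd, header, 0, 4, 0);
    fs.closeSync(fd);

    if (header[0] === 0x50 && header[1] === 0x4b) {
        // Zip archives start with the bytes "PK".
        childProcess.execFileSync('unzip', ['-o', archivePath, '-d', destination]);
    } else if (header[0] === 0x1f && header[1] === 0x8b) {
        // Gzipped tarballs (.tar.gz) start with 0x1f 0x8b.
        childProcess.execFileSync('tar', ['-xzf', archivePath, '-C', destination]);
    } else {
        // Assume anything else the build spits out is a plain tar archive.
        childProcess.execFileSync('tar', ['-xf', archivePath, '-C', destination]);
    }
}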

If you know me at all, you'll know that I hate Amazon and tolerate Microsoft, so it's weird for me to be endorsing AWS over Azure. But my experience with Azure has really soured my desire to ever try using it again. I know Microsoft really cares about the developer experience in their .NET tooling ecosystem so it's strange that the same care has not been applied to Azure yet. Maybe they'll get to it someday, but AWS is still the easiest to use at this point.

Anyway, that's more words than I ever thought I'd write about file compression tools so we'll leave it at that. I'm just glad it's working now and figured my tech savvy readers would get a chuckle at how one small assumption can cost you hours of your life.

It's Time to Upgrade Your JavaScript Developer Tools

JavaScript is everywhere. As the most popular language in the world right now, it's almost unavoidable. Especially if you're building things for the web. I personally hate JavaScript and tolerate TypeScript, but it's currently the best option for building websites, so I use it every day at work.

While WASM is promising and will eventually bring all major programming languages to the web, we're stuck with JavaScript when writing code for web browsers. But thankfully, all of the developer tooling we use can be written in languages other than JavaScript. Over the last few years this truth has slowly been realized as new developer tools have been written, released, and adopted by large portions of the JavaScript community. esbuild, parcel, rome, and more are taking market share from other tools written in JavaScript because of the pure speed gains that you can get from using fast, modern languages like Go and Rust.

The speed gains are substantial. For example, at work we replaced the following tools with newer ones written in Rust and saw these performance gains:

  • Webpack (55 second average) to Parcel (23 second average) = 32 seconds saved

  • ts-jest (30 second average) to @swc/jest (15 second average) = 15 seconds saved

While we didn't ultimately choose Rome due to it being a very new library that we want to see develop a bit further before committing to, code formatting tools written in Rust could improve our build times even further:

  • Prettier (14 second average) to Rome (8 second average) = 6 seconds saved

That's a total of 53 seconds that can be saved across the build, running tests, and verifying code formatting consistency. Not bad! This saves developers time waiting for a local development server to spin up (which usually happens multiple times a day, especially if you're working on multiple code bases), plus you get those time savings on every one of your CI builds.
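
To give a sense of how small some of these swaps can be, here's roughly what the ts-jest to @swc/jest change looks like in a Jest config (a sketch of the general shape, not our exact setup):

// jest.config.js
module.exports = {
    transform: {
        // Before: '^.+\\.(t|j)sx?$': 'ts-jest',
        // After: let the Rust-based SWC compiler handle TypeScript and JavaScript files.
        '^.+\\.(t|j)sx?$': '@swc/jest',
    },
};

That transform entry is more or less the "single line" I mention in the list below.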

If you adopt these speed gains across your entire organization, that's a lot of time and money saved if you're paying for something like GitHub Actions, where usage is charged by the minute.

Plus, there's the added benefit of (theoretically) saving energy, which in turn reduces your application's overall CO2 footprint. Building and using efficient systems is one of the biggest things the software engineering profession can do to play our role addressing the impending climate change catastrophe.

So why not make the switch?

Obviously there are a few concerns:

  • Familiarity with tools. It can be difficult to learn new systems. Thankfully, many of these projects are written with existing workflows in mind (@swc/jest was literally up and running with a single line of code being changed), but adopting any new tool comes with a new burden of learning its intricacies.

  • Time needed to make the switch. Even though some tools are easy to swap in, not every one will be. I think I spent around a full day of work over three months hacking together the Webpack to Parcel.js migration. It wasn't particularly difficult to do, but if you don't get the time or budget for upgrading tools approved, it's hard to make any changes.

  • Old tech is proven tech. These new tools are untested. What's to say there won't be a major breaking bug that appears later? Do you really want to commit your team to an unproven technology? Again, the ease of changing tools mitigates much of this risk, but it's a risk nonetheless.

If you can get over those concerns, there are likely some newer tools that have come out in the last few years that might be worth a look. They were definitely worth it for us.

If you have any success stories related to upgrading tools in order to save time, feel free to share by reaching out to me on social media.

Remote Work is a Life Changer

Now that I've been working remotely for more than two years, I figured it's worth sitting down to hammer out my thoughts and reflect on what I do and don't like about a fully remote job. I'm going to try to be careful to separate remote work from the realities of pandemic life, but since the pandemic is ongoing, it might be difficult to tease out the differences.

Recently my job has started allowing the technical folks back into the office (the scientists have been on-premise the entire pandemic), but I've only been in a handful of times, mostly to meet up with specific co-workers or attend a company event. Even with the option of going in and sitting at my own desk, I haven't really felt the need at this point, although I'm sure I'll be going in more often as COVID becomes less of an issue.

But despite having a desk and an office, I'm still "fully" remote and I intend to keep it that way. Let's dive into the pros and cons!

Pros

Night Owls Rejoice

I'm a night owl in a world built for early birds. Remote work has been a game changer for me. In the before times, I was forced to commute to an office and be there by the unreasonably early time of 8 AM each day for no discernible reason other than "my boss said so". At its worst, I drove 45 minutes each way to a client's work site on the complete opposite side of the metro area for over a year. Waking up at 7 AM to sit in traffic just to be in a cubicle working by myself is the perfect example of how stupidly we had structured software development work up until the pandemic.

Now, I can wake up fifteen minutes before a 9 AM meeting, make some coffee, review my notes, and positively contribute to the meeting objectives. All while actually getting some decent sleep that aligns with my body's needs! I can't imagine what the years of waking up at 7 AM and fighting my circadian rhythm have done to my lifespan. In fact, let's blame that for my hair loss!

My Desk, My Way

After getting a new job and realizing the pandemic was going to last a lot longer than expected, I upgraded from an IKEA kitchen table pulling double duty as a desk to a full-on standing desk setup, complete with multiple monitors, a microphone mount, ergonomic keyboard, comfortable chair, whiteboard, and various office supplies to help me organize my notes.

While I've had some decent desks at some of the more generous client sites I worked at, I've never had a work setup as nice as this one. And now, even if I change jobs in the future, I'll get to keep using the setup I've carefully curated. The first company I worked for was unwilling to spend the money needed for employees to have a comfortable working environment, so now that I know what I'm missing, I regret the six years I spent putting up with cheap office equipment.

Productive Breaks

Back in the dark ages when micro-managers forced you to come to an office so they could look over your shoulder to make sure you were wearing the right clothing and doing your job exactly the way they would, breaks were practically useless. The vast majority of them were spent hanging out in the break room talking with co-workers about nothing in particular. Sometimes I'd take a walk, but most of the time I was in a business office park with very few places to walk. Sometimes there weren't even sidewalks! (Fuck car culture, but that's another post.) If you had personal errands to run, you were out of luck unless they were something you could do from a smartphone.

Basically, breaks were times you sat around dreading going back to work.

Now that I work from home, my breaks are so much more enjoyable! Since software engineering is largely a creative discipline, inspiration comes and goes with the flow of the day. Many times when I'm stuck on a problem, my dog will ask for a walk. That's a great time to take a break, let my brain process the problem, and often I'll have an idea while out enjoying some fresh air! In addition, I can do minor chores throughout the day, use my lunch break to explore and engage with my neighborhood, or have access to my personal computer to play a video game or spend some time working on a volunteer or open source project.

Cons

Every Day Is The Same

Granted, every day also felt the same in the office. But at least I saw a complete cast of co-workers, including lots of people I wouldn't have the opportunity to see outside work. That created some fun variety in my day, since I got to swap stories with people completely different from me! In addition, I got to interact with people on the bus or on the highway. I sort of miss the thrill of an idiot cutting me off in traffic or causing a dangerous situation that might kill me in a crash! That's exciting, albeit not particularly welcome.

Now that I'm working remotely, I tend to see the same people. That is, if I see anyone at all. I have a couple of hobbies like rock climbing that get me out and about with my friends, but even that starts to feel the same since it's typically the same people participating.

Without a deliberate effort to go try new things, especially activities that involve strangers, each day starts to feel the same as the day before. It wasn't much better when commuting to the office, but it was slightly better.

Getting Stuck Inside

Because I don't have a commute that takes me to a physically different location, sometimes I look up to see an entire day has passed and I haven't left the apartment. This was particularly true when we weren't allowed to leave our apartments due to the pandemic, but even now it sometimes happens by accident when I'm not deliberate about getting outside and doing something that isn't work. Most of the time I'll do a lunchtime and end of day walk with my dog, so this isn't always an issue, but spending so much time in one place isn't very fun.

I could fix this by mixing up where I work. The roof of my apartment complex, one of the dozens of coffee shops nearby, or even my desk at the office are all great options. I just need to be more mindful and deliberate about doing so! Overall, not a terrible "con" to have.

Work is Always Around

When you live in a 500 square foot apartment, there isn't much room for a separate workspace. The wonderful, amazing, perfect workspace I've built pulls double duty as the desk for my personal computer. I'm able to tuck away my work laptop and notes once I'm done for the day, but it's always sitting there and thus always on my mind to some degree.

Ideally I'd have a separate room for work that would be easy to avoid after I'm done working for the day. It's the main reason I'm considering upgrading to a two-bedroom apartment next time I move, but have you seen real estate and rent prices these days? Even as a well-paid software engineer, I'm not a fan of spending much more on rent than I am right now. That's just the cost of living in a city, I suppose.

I'm sure there's something more I could do to hide my work materials at the end of the day, but no inspiration has struck yet. Something to consider when I next upgrade my desk setup.

Conclusion

Now that I've deftly used the rule of three for each section, we can call this post complete. As you can see, the pros vastly outweigh the cons. I've really enjoyed remote work, and I will never again do five days a week in the office, unless they pay me a ridiculous salary.

Is there something I missed? What other delights or issues have you run into working remotely? Feel free to reach out to me on my blog's guestbook or on any of the social media accounts to share your own experience.

Why I've Yet to Publish a Blog Post on Veganism

Surprisingly, I haven’t written anything about one of the most meaningful decisions I've ever made. About six years ago I became a vegan!

This isn't a secret to anyone who knows me, but I also don't really bring it up unless it's absolutely relevant (like when making sure I'll have food to eat at various events I attend). I would like to bring it up more often, since it's an ethical belief that I hold dearly and I want others to consider making the same choice, but talking about veganism can be a bit touchy.

Part of this is due to the stigma/discrimination vegans face when bringing up the topic. On the Internet you run into a lot of "found the vegan" comments with an eye-roll emoji anytime animal rights come up and a vegan stakes out an ethical claim. Some men find veganism challenging to their view of masculinity. Other folks just don't like to think about where their food comes from and any reminder causes them to lash out at vegans because of the cognitive dissonance they feel when being reminded of the suffering their food choices cause. And a good chunk of people are just afraid of change.

But the biggest reason I haven’t written about veganism is that it can make others feel uncomfortable. Just like religion, veganism carries an inherent “I’m right, you’re wrong” ethical aspect. I’m vegan primarily for ethical reasons, meaning that, in my view, animals should not be exploited in any way, primarily because they can't give consent to sacrifice their bodies for our use. Because it's an ethical stance, my decisions to forgo all forms of animal products as much as possible in my life insinuates that anyone who isn’t vegan is not living ethically, and that message is heard by the other person whether or not I explicitly say it.

Food is extremely important to culture and is a way to bond with others. Rejecting a meal because of an ethical choice implies that the person was unethical for preparing it, and they can take that personally (especially when they don't have a good understanding of what veganism is). I can't tell you how many times I've had to turn down home-baked goods because they weren't vegan. It's not fun, because social norms dictate that you should enjoy the food that others share with you. It marks me as an "other" and someone who has to have their needs specifically catered to in order to participate fully in food-related activities.

Because veganism is an ethical framework, it carries the same social pitfalls as discussing religion or politics. My messy exit from Mormonism taught me that I need to stay quiet regarding sensitive issues if I hope to keep my friends and family around. When I first became an atheist, I shouted it from the rooftops. Through intensive study and thought, I had discovered that Mormonism (and all supernatural worldviews) don't appear to be based in an objective reality. But all that missionary training from growing up in a Mormon household and going to Ecuador to try to convert people to Mormonism taught me that I should be loud and proud about sharing my innermost truth with the world.

So I did.

But when I challenged the worldviews of my Mormon friends and family, I was unfriended on social media, excluded from social events, threatened with expulsion from college if anyone in the BYU administration found out, and even shunned by some family members and in-laws.

I didn’t want to make the same social mistake with veganism. While I had found a wonderful new lifestyle that dramatically decreased the cruelty I inflicted on the world and wanted everyone to know, food — like religion — is a deeply personal subject. People don’t just ditch decades of dietary habits just because a vegan showed them a video of male baby chicks being ground up hours after birth because they can't lay eggs.

Overall, I do bring up my veganism fairly regularly, but mostly in the context of ordering food in a group. If I didn’t bring it up, I often would have literally nothing to eat. So pretty much everybody knows I’m vegan, but I try not to be obnoxious about it precisely because of the social stigma it can cause.

That's the eternal conundrum of vegans. We don't mean to be pushy, but many foods aren't vegan by default. If we don't ask for an accommodation, we'll go hungry. While I typically have an emergency stash of nuts for those situations, sometimes that's just not possible.

All that said, I still feel the need to publish something on the topic. I've been sitting on this blog post for over two years now, usually only updating it after an argument I had online with someone who wanted me to shut up about my veganism. I'm doing that right now, in fact. I expressed disappointment that my favorite writer didn't have any faux leather options for the special edition versions of his books. And sure, asking for a book that "isn't wrapped in the skin of a corpse" is an accurate but not exactly tactful way to phrase it, but even if I had been more polite the downvote police would've come anyway. In my experience, it doesn't matter how I phrase things. Unless I'm in a vegan-friendly space on the internet, any comment tangentially related to veganism is rejected by the larger community. Which means I often don't say anything at all.

But I don't like feeling silenced, so that often leads me to be blunt and do things like describe a leather-bound book as using the skin of a corpse. Is there personal development to be made there? No duh. But fuck, why do I have to be the one to be the bigger person when the default worldview is that it's okay to slaughter hundreds of thousands of adolescent animals each day just to eat?

Time to take a breath, Lane.

Veganism is important to me, and I do wish more of the world would go vegan. But I don't expect it. The world is already full of so much pain and suffering, so I understand why some folks don't care to think about the animals when we still live under a global system that produces unacceptable levels of human suffering.

I don’t expect anyone who reads this to immediately switch to a plant-based lifestyle. I sure didn't. I lived in Dallas, TX when I tried to go plant-based. A place where most folks don't even know what the word vegan means. It took me a good year or more to fully transition, partially because of the hostile anti-vegan, pro-meat culture of Texans, but mostly because it required rewiring some of the most ingrained habits I had in my life.

Really, this post is just getting my frustrations onto paper. It's not fun being the butt of jokes. It's not fun going hungry because there are animal products in every dish at a party. It's not fun being stereotyped as an annoying, loud-mouthed idealist (even though it's very much true in my case). It's not socially fun being a vegan. There's a reason why a good number of my friends are vegan and vegetarian. We have to stick together because nobody else wants us around. I attribute that to the cognitive dissonance people feel about their treatment of animals, and having a vegan or vegetarian around reminds them of that. But there's also the possibility that maybe we are just a bunch of annoying fucks. Regardless, I've found my people and they are wonderful. They make my life so much better, so does it really matter if the wider world is annoyed by our existence?

But today is the day I actually hit "post" on this thing. I'm done leaving this as a draft, as imperfect as it may be. I know this sounds whiny and privileged, because it very much is. I'm a cis, straight-passing bisexual man living an upper-middle-class lifestyle. What right do I have to complain about some dumb folks on the internet or the occasional meal that I choose not to eat?

I know I haven't made a great argument for why someone might want to go vegan. In fact, this post is likely to scare folks away. Who would want to get yelled at on the internet, excluded from food-centric work/social events, or make family dinner more difficult?

But I do invite you to look into it more. There are so many great resources online that cover the what, why, and how of veganism. If you're interested in delving deeper, even if it's just to learn more about veganism so you can be sympathetic to me or another vegan friend you have, I recommend checking out the following resources to learn more or find vegan recipes:

SEATTLE IS HOSTING THE WORLD CUP!

I'm so stoked! Obviously, FIFA is a garbage organization run by criminals, but also... THE WORLD CUP IS COMING TO SEATTLE.

So yeah, mixed feelings, but I'm excited to show the world how amazing our emerald city really is.

When Are We Going to Do Something?

I said I'm not writing about this again, but I will continue chaining together my periodic posts about gun violence in the United States every time something particularly egregious happens.

We just had the racist shooting in Buffalo. Now we've got the senseless Uvalde, Texas shooting.

At this point I've given up hope that we'll do anything regarding gun control. There is so much we could do without even coming close to running afoul of the 2nd Amendment, but we don't because our legislative branch has been broken for decades.

But I've already written too many words. I said I wasn't going to write about this again. Please reference my past work. The points I make there still stand.

And fucking call and email your Senators. Senseless death due to lackadaisical regulation of firearms shouldn't be a partisan issue.

SEATTLE SOUNDERS ARE CONCACAF CHAMPIONS

We won the CONCACAF Champions League title tonight! It was the best soccer match I've ever attended. We set the CONCACAF Champions League attendance record with 68k+ people screaming our hearts out as we scored each of our three goals to win the championship!

Next up for the Sounders is the 2023 FIFA Club World Cup against some of the best clubs across the world.

And now that our CCL run is over we can get back to focusing on MLS play.

What. A. Game.

What. A. Team.

What. A. City.

I love Seattle.

Hacking Legacy Sites for Fun and (Non)profit

Audience

This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are provided below to give context to those without a background in developing software.

Definitions

  • GDPR (General Data Protection Regulation): A European Union law focusing on data protection and privacy. California has a similar one called the CCPA (California Consumer Privacy Act). There is no federal law in the USA providing data privacy protection.
  • Cookie banner: Those annoying cookie notifications you get on every new site you visit asking you to choose how closely you want the website to track your behavior.
  • Google Analytics: Google's analytics platform for tracking user behavior. Used by a mind-boggling number of sites.
  • API (Application Programming Interface): Enables applications to exchange data with each other using a documented interface. A major revolution in computer science that enabled the software industry to grow so quickly.
  • JSON (JavaScript Object Notation): A standardized format for representing JavaScript data as human-readable text.
  • regex (Regular Expression): An esoteric way of searching through text using patterns. For example, this regular expression was written by Satan himself to match email addresses: (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

Recently at work I had to fix a few legacy websites with broken cookie banners after we did a major GDPR compliance effort across all the publicly accessible websites. These sites were initially created 14 years ago and haven't been updated for many years. It's a technological wonder, but they're still up and running!

Unfortunately, their old age makes delivering updates difficult. And thanks to some technology choices that broke the modern cookie banner code, there were some updates that needed delivering.

Thankfully, those sites already had Google Analytics. Besides being able to track your every move on a website, Google Analytics has the handy feature of remotely delivering code snippets! That's actually how the cookie banner software is delivered to these old sites in the first place. So instead of trying to figure out how to resurrect extremely old deployment infrastructure, I decided to first try to hack together a solution to fix the broken cookie banner software and patch the website via Google Analytics.

That effort turned into the hackiest code I've ever written. It's ugly, nonsensical without the context of the problem at hand, and uses browser APIs I hardly knew existed.

But it works!

And that was the key point. We have no plans to actively return to those legacy sites and provide new updates. All that mattered was that we were compliant with GDPR. Were we actively maintaining those sites, or had major rework for them on the horizon, I wouldn't have turned to my hacky solution. I showed what I wrote to a couple of good friends and they were rightly horrified at what I had done.

But again, it works!

So let's take a look at the code.

First up, I added a forEach method to the JavaScript String prototype.

...

Yeah. It's that bad.

The good news is that since forEach on a String makes no sense, the site doesn't already try to call it anywhere, so there are no conflicts!

But when we look at the actual implementation, it gets worse.

Theoretically, in a sane world, forEach on a String might be a method that loops through each character in a string and lets you do something with it. That would make a bit of sense and can already be done in JavaScript, just not using forEach.
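
For comparison, here's what a "sane" character-level forEach already looks like in modern JavaScript, no prototype hacking required. You just spread the string into an array first:

// Iterate over each character of a string the normal way.
[...'cookie'].forEach(function (character) {
    console.log(character); // logs c, o, o, k, i, e
});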

But that's not what I did. I discovered that the cookie banner broke because we had a String instead of a JSON object. But Strings can be turned into JSON!

"So", I thought, "what if I turn the String into the JSON object the code expected, then do the forEach stuff that was supposed to happen anyway on my newly created object!"

Turns out, that actually worked 🤣

String.prototype.forEach = function(originalForEachFunction) {
    // Parse the string into the array the cookie banner code expected all along...
    var stringToJSON = JSON.parse(this);
    // ...then run the callback over it like a normal Array forEach would have.
    stringToJSON.forEach(originalForEachFunction);
};

However, the journey wasn't over. While that fixed the error I was seeing and got the cookie banner to appear, I noticed there was an error when accepting any cookies! Apparently, the cookie banner would make a server call to record what preferences were selected.

I dug into the code and discovered that the network call was failing because the same String I turned into a JSON object earlier was still a string later when it should be an object! That's because the code above didn't actually modify the string at all.

At this point I thought I'd hit an impasse. There was no obvious way for me to insert myself into the code like I did earlier with my vomit-inducing String.forEach hack.

I let my brain stew on it for a while. That evening, I listened to a new episode of Darknet Diaries, a phenomenal podcast that tells stories about the dark side of the internet, mainly focusing on hackers and computer security. It's one of my favorite podcasts, and it reminded me that I should think like a hacker regarding my cookie banner program.

And what would a hacker do?

Intercept every single network call, look for the data they're interested in, and modify it as needed!

While I typically don't work during the evenings, this problem and idea were burning a hole in my head and I had to try it out immediately.

So there I sat, the faint glow of the computer lighting up my face in the dark room, digging up browser API documentation on how to peek at every network call being made. That night of hacking led me to create this monstrosity, which involves XMLHttpRequest, regex replacements, and lots of null checks (I modified the code to simplify what's going on and to provide some minor obfuscation, so imagine something even worse):

// Keep a reference to the real send() so every other request still works normally.
var originalSend = XMLHttpRequest.prototype.send;

XMLHttpRequest.prototype.send = function(data) {
    // Only touch requests carrying the field the cookie banner mangles.
    if (data && typeof data.brokenField === 'string') {
        // Undo the double-stringification: unescape the quotes and unwrap the arrays...
        data.brokenField = data.brokenField.replace(/\\\"/g, '"').replace(/\"\[/g, '[').replace(/\]\"/g, ']');
        // ...then parse it into the object the server actually expects.
        data.brokenField = JSON.parse(data.brokenField);
    }

    // Hand the (possibly repaired) payload off to the real send().
    originalSend.call(this, data);
};

It's horrible and I hate that my brain came up with it, but it works!

The best part about it is that I never would've been able to come up with such a bonkers idea earlier in my career. I'm at a point where I feel extremely comfortable with web development technologies, meaning I now understand what is available to me and how I can bend the rules. That kind of mastery feels incredibly good once you're there and the feeling of getting something working in a non-traditional manner is the heart of the hacker spirit. Makes me think I would've had a solid career as a white hat hacker in another life!

Anyway, I hope you hated that code as much as I did. The hack has been humming away in production for a few weeks now and works flawlessly.

And before you ask, yes, I heavily documented what is going on with the hack in several places so that people won't be confused when they find my monster a few years down the road.

Until the next hack,

/Lane

What Should We Expect From FOSS?

Audience

This post is written for an audience of software engineers and assumes general Internet experience. Some definitions are provided below to give context to those without a background in developing software.

Definitions

  • Free and Open Source Software (FOSS): Software with published source code that anyone is free to use, study, or modify
  • JavaScript: The world's most popular programming language
  • Node Package Manager (NPM): An online collection of JavaScript code and associated set of tools that software developers use to share their work with others
  • Package: A bundle of code that can have different versions, allowing for software to be updated over time without forcing code using it to immediately upgrade
  • Protestware: A portmanteau of protest and malware, with malware being a portmanteau of malicious software
  • Software License: A document associated with a software project explaining how other developers can use, modify, or share the code

Yesterday, a new vulnerability was reported in the National Institute of Standards and Technology's National Vulnerability Database regarding some "protestware" that was added to a popular JavaScript NPM package that gets about 1 million downloads a week.

The owner of the node-ipc package updated the code to add a 1 in 4 chance of deleting the contents of all the files on your computer and replacing them with the ❤️ emoji if you had an IP address that came from Russia or Belarus. This affected versions 10.1.1 to 10.1.3, meaning a patch version inappropriately delivered this breaking change.

Later, the owner of the package removed this behavior and correctly published a different form of protest as a new major version (11.0.0) that uses the peacenotwar "protestware" package (which was also written by him). Using node-ipc will now put an anti-war message in a text file on the user's desktop, instead of modifying existing files on the user's system. This happens for all users, not just those with IP addresses from Russia or Belarus.

While the more malicious ❤️ emoji update was not available for very long, it still affected many projects and people, including popular ones like the Vue CLI, a developer tool to facilitate building websites. One person even claimed to be part of an NGO that lost thousands of files they were collecting to document Russian war crimes.

This whole thing has caused a bit of an uproar in the online developer community. People are flooding the node-ipc and peacenotwar repositories with issues calling the developer a Nazi or expressing disappointment because the protestware will damage the reputation and trust of open source software. And even more people are watching the deluge of comments with interest, since this is not the first time a developer has updated a popular NPM package to send a message to the broader software development community.

As a software engineer myself, I fall into that last group of interested spectators. All this has been fascinating to watch and has led me to closely examine my beliefs about what it means to use and develop Free and Open Source Software (FOSS) and how I can prevent something like this most recent NPM issue from affecting my team.

So with that context, let's dive into the actual article: What should we expect from FOSS?

Software Licensing

First, let's start by looking at how software licensing works in the open source community, and whether this particular protestware broke the terms of its license.

The license for node-ipc is the popular and flexible MIT license, which offers the software "as is", to be used however the user wants. peacenotwar is licensed under the stricter GPL-3.0 license, which requires any modifications to be published under the same license and that the source code be made available.

While I'm not a lawyer, my understanding is that both licenses absolve the developer of any liability for issues that arise from using the software. This is common in the licenses open source projects typically choose, so it's not surprising to see them here. But many of the people upset about node-ipc seem not to understand that downloading software from a random person on the internet comes with no guarantees, especially given the MIT and GPL-3.0 licenses attached.

From my perspective and experience, node-ipc and peacenotwar are following the terms of their license, even while providing undesired functionality in an updated version of the node-ipc package.

What can this tell us about open source software?

To put it harshly: you get what you paid for and this software was free.

Open source is about making sure the source code is easily accessible. It has nothing to do with quality. For every amazing piece of open source software, there are hundreds of awful ones. I should know, since I've written some of the useless ones! All you have to do is look at the GitHub profile of a random developer and you'll stumble across a pile of code that is technically open source, but is not (and never will be) worth using.

The lesson here is: understand that open source software licenses promise you nothing, other than that their source code will be publicly available for examination.

Versioning

So if there isn't an open source license that protects the user from malicious code updates, what could prevent open source software from delivering malware?

Versioning. Theoretically.

In an ideal world, every update to software would be closely vetted by a team of experts who verified it behaved correctly before being published for the world to use. In that perfect parallel universe, even if a malicious update got past the expert team nobody would download that update before checking it themselves and it would never be set to update to an unchecked version automatically.

Alas, we do not live in such a paradise.

NPM uses Semantic Versioning, which is a widely used standard for labeling new versions of software. But it's just a convention, so there is nothing preventing a developer from breaking the rules when creating new versions. That's what happened with node-ipc, since it introduced the file-destroying protestware as a "patch" update. Patches are meant for non-breaking changes, like fixing bugs or making updates that don't break anything for the end user.

Clearly, wiping files on the computer is a breaking change, so the owner of node-ipc broke the versioning "contract".

Software development relies on an incredible amount of trust. When you use someone else's software, they often have used some other person's software to create it. This leads to a long chain of dependencies, meaning your website to share pictures of cute animals was ultimately created by the work of hundreds or thousands of people. That trust and sharing of quality software is a major part of why there's been incredible growth in the tools available to software engineers and the resulting applications being produced.

But it does have its downside, which was clearly on display with the node-ipc update.

That trust is exploited by the default behavior of NPM when adding new software dependencies. NPM defaults to a "compatible with version" (caret, ^) range when you add a dependency, which automatically picks up new minor and patch versions of a package when running a very common NPM command (npm install). While this can be helpful for quickly distributing software updates like bug fixes or performance improvements, it should not be the default, precisely because people can abuse Semantic Versioning.
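
As a concrete illustration (a hypothetical package.json fragment using the node-ipc version numbers from earlier), NPM's default caret range would have happily pulled in the malicious 10.1.1 through 10.1.3 patch releases on a fresh npm install:

{
    "dependencies": {
        "node-ipc": "^10.1.0"
    }
}

Pin the dependency to an exact version instead, and nothing changes until someone deliberately upgrades (and hopefully reviews) it:

{
    "dependencies": {
        "node-ipc": "10.1.0"
    }
}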

Because of the default behavior of a widely used tool, any developers that did not take the extra time to lock their package versions could have woken up a few days ago to a hard drive full of ❤️ emojis.

Engineers should take the time to understand the tools they are using and how software versioning behaviors could impact their code, but the reality is that most don't. Take me for example. I didn't completely understand how versioning worked in NPM earlier in my career even though I had been using it for years and I'm the kind of person who enjoys reading software documentation for fun! Many software engineers face tight deadlines. Unfortunately, things like dependency analysis and reviews don't happen for a good portion of newly written software.

Looking back at node-ipc's versioning, there is now a version 11.0.1, part of a new major version line that prominently states that the tool now contains the peacenotwar package, which is far less malicious than the original protestware. This is versioning done properly. While the new version still delivers unwanted functionality, at least node-ipc is now following versioning standards when making noteworthy changes.

The lesson here is: lock your dependencies and review any software upgrades closely. Open source software does not guarantee that there will be working software or proper versioning. The whole point is to be open and free to everyone and that includes incompetent or malicious actors. You really should vet any new code you did not write yourself before using it.

Is Protestware A Good Way to Protest?

Part of why I wanted to write an article examining this incident and how it relates to expectations in open source is because of the word "protestware". That's a new term I hadn't stumbled across before, and it seems like it's new to most of the wider development community as well.

The situation between Russia and Ukraine is incredibly hard to watch, and I feel deeply for the people of Ukraine who are being unjustly invaded by an autocrat trying to leave his mark on the world. I've got a tinge of fear because I live in Seattle, which could become a target if Putin decides to whip out the nukes. When I saw that a decently popular package on NPM decided to create some havoc for Russian users, I initially chuckled and thought that was a clever way to make a statement. The idea of protestware inherently appeals to me. Especially when used for a cause that I believe is morally just!

I imagined some Russian hacker following Putin's orders to hack a US power plant waking up one day to nothing but ❤️ emojis, ruining his whole day and screwing up his spy work. That's an incredibly satisfying image. I'm having another laugh imagining it just now.

But that's not the reality of the situation.

Internet attacks know no borders. It's entirely possible that some grandma living in Canada got hit because her ISP just bought some IP addresses that used to be located in Russia. Or (if that NGO claim I mentioned earlier is true) some desperate Ukrainian's reporting of a war crime is lost forever because they died from a bomb the next day. Or a Russian anti-war activist loses a valuable spreadsheet containing the contact information for a nationwide network of activists. Or an MIT software engineering student is using a VPN to watch some Russian soccer games and runs the protestware, losing his entire dissertation.

There's so many ways the initial node-ipc protestware could've hurt innocent people.

Which puts me in an interesting position regarding how I feel about it.

Governments have imposed economic sanctions on Russia. Companies have pulled their business. The global banking system kicked Russia out of SWIFT.

All of those actions hurt innocent people too, but I largely agree with what's being done to dissuade Putin from continuing his invasion. While economic sanctions will hurt Russians who bear no responsibility for what's going on, they are less damaging than a full-on war.

So why can't an individual make a similar choice to attempt to inflict non-physical damage on Russia?

I lean towards supporting the idea of protestware in general, and tolerating this particular situation. The developer screwed up by introducing the file-modifying change as a patch version instead of a major one and by not disclosing the change. That broke the social contract for delivering open source software and will damage his credibility going forward. But philosophically he has free rein to do whatever he wants with the open source software he created, so it's hard to completely condemn him for trying to do his small part in protesting the Russian invasion of Ukraine using the skills he has at hand. It's something that could have caused real damage, though we'll likely never know the true extent. I wouldn't condone this particular functionality change, since I think there are less damaging ways to get the same message across.

The updated version that leaves an anti-war message on a user's machine is a much easier call for me.

I think it's a brilliant way for a software engineer to make themselves heard. But there is no doubt that it would be incredibly annoying for those using that software. That is, after all, a major point of protests. They don't work if nobody notices!

However, were I using the node-ipc project, I would have lost respect for the developer and the entire project because of the protestware. I get why people are upset enough to spam the node-ipc repository with angry and hateful issues directed at the developer, even if I think many of the messages go too far and constitute online harassment. I don't envy him trying to clean it all up and move on from this either.

Overall, I'm going to lean on what seven years of consulting taught me. The answer is: "it depends". There is a proper place for protestware. Software is a form of speech, so I think it should be protected to a reasonable degree, and that includes forms of protest. Just as there are good and bad ways to hold an in-person protest, the same holds true for protest in the form of software. That line will no doubt be difficult to walk, as it is for any protest.

What Should We Expect From FOSS?

By this point, I hope I've convinced you that open source software is a grab bag that promises you nothing and everything all at once.

I love software engineering precisely because of open source. I know of nothing like it in human history. Millions of hours have been dedicated to creating software that is given away for free, to be remixed and built upon. That has led to some incredible leaps in digital technology over a few short decades. FOSS, as a concept, is a technological marvel that should be up there in importance next to the discovery of fire and agriculture. It has the potential to radically transform the world. For good, or bad. Just like any powerful technology.

But those lofty expectations should come with a dose of reality. As we saw with node-ipc, there's danger in blindly accepting open source software from other people before reviewing it yourself. The problem is making that review a reality: software engineers rely on so much third-party code that it would be practically impossible for every developer to understand every dependency change.

It would be great for tools like NPM to make changes that prevent malicious or undesired updates from occurring in the first place. That's something we can push for in the open source community. Software engineers never met a problem that couldn't be solved with more software! 😂

Until we get immaculate tools that save us from ourselves, here are some specific actions that can be taken to secure our projects from being impacted by this kind of protestware in the future:

  • Get your software from respectable institutions that have a track record of releasing quality code.
  • Lock your dependencies so that you are only ever making a conscious decision to upgrade.
  • Review release notes for any new code you are including in your software.
  • Contribute to open source software by writing good code or reviewing the code of others to make sure it's working as expected.
  • Write your own code where possible. While you don't want to reinvent the wheel, be deliberate about what software you are using.
  • Learn about the tools you use and how they work. Don't forget to think about potential attack vectors!

In conclusion, we're probably going to see a lot more protestware in the future as software continues to be an ever larger part of our lives. The node-ipc issues remind us all that open source software offers no guarantees. While FOSS is amazing, its downsides should be recognized and considered when choosing to use new open source software. Security teams need to become more commonplace in the industry, and better ways of establishing and maintaining trust for FOSS developers and users would make it easier to sleep at night when updating your dependencies.

Ultimately, it's up to software engineers to protect their systems from malicious actors. To do so means understanding where FOSS code comes from and using that knowledge to set realistic expectations for what open source software can do for us.

Pandemic Life: Year Two

Year two is over!

I figured I would write a follow up to last year's post about what it's been like to live in a pandemic. I was desperately hoping there would be no need for a second one because the pandemic was over, but here we are.

Thankfully, the naive optimism of my first year post largely worked out despite the pandemic entering its second year of changing the world. I was fully vaccinated in May and got my booster in December. Thanks to that I was able to see friends and family way more often than in 2020.

While there was still considerable risk in 2021, it wasn't as terrifying to go out, especially when the people I was with were vaccinated. As far as I can tell I didn't get COVID despite having done all of the following with friends and family:

  • Regularly rock climbing at the gym (I climbed my first V4!)

  • Snowboarding trips to Snoqualmie

  • Camping trips to the Olympic Peninsula

  • Spending over a month in Utah, with frequent family events

  • Dining in at restaurants

  • Regularly attending D&D and board game nights

Even with that fairly busy list (at least for an introvert like me), I only ever had some slight sniffles and aches once or twice over the last year, and I never tested positive for COVID. Seems like the vaccine worked pretty well!

Just in the last few days my county has removed its mask mandate. Feels incredibly weird walking through my apartment building without a mask, and I'm still wearing it when I go to somewhat crowded places. But it seems like we're heading in the right direction. The worst of this is (hopefully) over.

When does this thing officially become endemic? That change would be nice. I know other parts of the world are not doing as well as my neck of the woods, though, and I hope they can get all the resources they need to finally wrangle COVID down to endemic status.

But despite all the ups and downs, crazy news stories, a budding war in Ukraine, and countless other awful things that happened around me in 2021, this year of the pandemic was definitely better for me than the first.

I'm really hoping I don't have to write another one of these next year. 🤞

Static Code Analysis: Reducing Your Team’s Cognitive Burden

Have you ever run into a pull request that seemed impossible to merge? One with hundreds of comments from a dozen people, with two folks passionately arguing about choosing variable names, which language features to use, or whether to delete that unused method that might get used someday. Nobody can seem to agree on a set of standards, and with no ultimate authority to turn to, the code review devolves into a contest of wills.

Those pull requests from hell result in a lot of wasted time for a software engineering team. Don't you wish you could harness that extra time and funnel it back into building a quality product?

That’s where static code analysis comes to save the day!

Static code analysis is the process of analyzing source code against a standard set of rules. These rules vary based on programming language, business domain, and team preferences, but practically every major programming language has a decent static analysis tool that can be added into your team’s regular workflow.

Static code analysis can be accomplished with a variety of tools and methods. This article is going to talk about just two of them: types and linting. If you don't have either added to your team's workflow, those two are a great place to start.

Types

Programming languages can generally be separated into two camps: those with static types and those with dynamic ones.

Static typing shows up in languages like C++, C#, and Rust. Dynamic typing can be found in languages like Python and JavaScript.

In general, static types describe the structure of the data in your code and are checked at compile time. That means bugs related to the shape of the data you're manipulating are caught up front, as part of the development process. In a dynamically typed language, those same bugs only show up at runtime, which can lead to a bad user experience or errors in production environments.

Some dynamically typed languages have ways of adding static types, so don't despair if your team is already using one. TypeScript is a great example that extends JavaScript to include types. If your tech stack has a way of using types, you should absolutely be using them!

Some programmers, especially those who have never used types, can be hesitant to add them to their codebases. It's one extra thing to learn, and when you switch from being able to run your code immediately to having a compiler yell at you before you can even run it, the experience can be a bit jarring.

But it's totally worth the upfront cost.

Let's look at a simple example of fetching data from an API in JavaScript:

async function fetchData(id) {
    const response = await fetch(`https://my-api.com/data/${id}`);
    return response.json();
}

async function doSomething(id) {
    const data = await fetchData(id);

    // what can we do with data?
}

Do you have any idea what sort of data you'll be getting from the server? Even if you remember right now, will you be able to answer correctly a year after writing the code? Our brains are not perfect records of everything we've done, so at some point you'd have to look at the documentation (if there even is any) or hit some breakpoints while running the code to figure it out.

But sprinkle some TypeScript in there and life gets so much better:

interface MyApiResult {
    id: number;
    name: string;
    address: string;
    city: string;
    zipCode: string;
}

async function fetchData(id: number): Promise<MyApiResult> {
    const response = await fetch(`https://my-api.com/data/${id}`);
    return response.json();
}

async function doSomething(id: number) {
    const data = await fetchData(id);

    // We can easily use anything listed in the MyApiResult interface!
    console.log(`Hello ${data.name}. How is ${data.city} these days?`);
}

Now we can immediately see that fetchData will return some basic user information. While this example is a bit contrived, having a whole team working on a codebase and not being able to immediately see what fetchData does results in a bunch of wasted time looking at documentation or manually running the project and triggering the workflow that runs the code.

Types are the most important form of static analysis, especially as team size grows. Programming is all about manipulating data in a computer, so why shoot yourself in the foot by writing code that ignores what that data looks like?

Save your team brainpower for problems more important than the shape of your data and get yourself a language with a type system!

Linting

The other major piece of static code analysis worth adding to your team's workflow is a linter. Linting is the process of analyzing code for bugs, performance issues, proper use of language features, and stylistic choices to ensure code consistency.

Most modern languages have some sort of linting system. Some are built into the language, like Rust's cargo clippy command, while others arise from community efforts, like JavaScript's eslint.

However, initially setting up a linter can be difficult to do on a team. Remember those arguments about code style or the proper language features to use in PRs? A linter codifies that into a standard set of rules that everyone's code can be checked against. So the team will have to agree on what those rules should be and then the computer can enforce compliance with every new addition to the codebase.

The biggest gain from a linter is consistency. Even if you don't like particular linter rules, your team doesn't have to argue about what the code looks like during every pull request. A good team is full of people who will value consistency over the "perfect" linter configuration, so you should strive to pick sensible defaults that everyone can live with. Using a popular configuration is one way of quieting even the noisiest developer, since a configuration that's good enough for hundreds of thousands of other people will be good enough for your team.
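
As a sketch of what that looks like in practice (the specific rules below are just examples I picked, not recommendations), an ESLint configuration that starts from a popular shared baseline and layers a few team decisions on top might be as simple as:

// .eslintrc.js: extend a widely used baseline, then add team-specific tweaks.
module.exports = {
    root: true,
    env: { browser: true, es2021: true },
    extends: ["eslint:recommended"],
    rules: {
        // Decided once by the team, enforced by the machine from then on
        "no-unused-vars": "warn",
        "eqeqeq": "error",
    },
};

Once a file like that is checked into the repository, the argument is settled: the linter, not the loudest reviewer, decides.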

Once a linter is installed, make sure it runs automatically and that you have gates in place so that no new code gets merged until the linter is happy. Without a hard blocker, linter errors can and will seep into your code over time, eventually leaving you with thousands of errors or warnings that end up getting ignored by the team instead of addressed. That leads to code rot, performance issues, and a generally unpleasant developer experience when you're faced with a wall of doom every time the linter runs.

Conclusion

Programming is a creative endeavor, and human brains only have so much capacity each day. By taking entire classes of issues off your team's plate, you free everyone up to focus on the things that truly matter: solving the problems that users of your system face.

A strong type system and sensible linting rules are two great ways to reduce your team's cognitive burden, allowing you to get more done with less time. Automation is the name of the game in software engineering, and having a computer check code against a set of rules is the perfect use of CPU cycles.

Don't spend your precious time arguing over pointless semantics. Use static code analysis tools.


This is the fifth of nine articles delving into the processes that every effective development team should use. Stay tuned for more!

Book Review: This Is How You Lose the Time War

Go read it.

This Is How You Lose the Time War is one of the most beautifully written pieces of fiction I've ever read. I even read parts of it out loud because the words were that delicious.

I don't read out loud.

Ever.

I loved this book too much to write a detailed review. I'm still reeling from the experience and I can't wait to read it again.

In short, it's a love story scattered through time and space, giving you a peek into the worlds of two intergalactic time soldiers while leaving a tantalizing universe hidden between the lines on every page. It's an intimate tale of godlike spies who find themselves having more in common with each other than with their own communities, and of how they hide their budding relationship from their hivemind-like transhumanist(?)/alien(?) collectives.

I'm at a loss for words because nothing I write will ever be as gorgeous as the poetry within its pages.

This Is How You Lose the Time War is a lovingly crafted puzzle-box of a novel that deserves a place on your shelf.

Go read it.

Yew Hooks with GraphQL

Over the last year or so I've been occasionally hacking away at a web app called Dicebag, which will eventually become a collection of useful tools to facilitate in-person Dungeons & Dragons games.

Part of this project stems from my dissatisfaction with the other tools I've found. Most tend to focus on running a game online or preparing for games in advance. I want something that enhances the player and DM experience by presenting contextual data depending on what's happening in the game, keeping players off their phones and engaged in the story.

I'm a React developer by trade but a Rustacean at heart, so I decided to write it using the Yew framework, one of the more popular Rust web frameworks. It's been really fun so far! The app is ugly and non-functional except for a janky initiative tracker I just put in place, and even that is far from polished.

Regardless of the messy code and unpolished UI/UX, it felt great to put together a useful, generic custom hook for making GraphQL requests using Yew and the Rust graphql-client crate.

This post is a short walk-through on the anatomy of my custom GraphQL hook and ways I'd further like to improve it.

So, let's take a look at the hook! The code below is heavily annotated with comments I've added for the purposes of this blog post to explain Rust concepts, the libraries I'm using, or things I'm particularly happy with!

First up, the example GraphQL query we'll be working with:

# Query to fetch a campaign by ID. If no ID is provided, return all campaigns
query CampaignsQuery($campaign_id: Int) {
    campaigns(id: $campaign_id) {
        id
        name
        description
    }
}

Now an example usage of the use_query hook:

// Example usage of the campaigns query within a Yew functional component

#[function_component(CampaignsPage)]
pub fn campaigns_page() -> Html {
    let variables = campaigns_query::Variables { campaign_id: Some(1) };

    // I'm particularly happy with the user experience on this hook.
    // All you have to do is choose the query you want to make by specifying
    // the generic parameter's struct and pass in the variables for that query.
    // Can't get much simpler than that!
    let query = use_query::<CampaignsQuery>(variables);

    // ... use the query results to display campaign #1
}

And finally, the hook code itself:

// The code for the use_query hook

// `graphql-client` crate builds all the types for you just by looking at the
// GraphQL server schema (which is auto-generated with a CLI command)
// and the query you wrote (which was the first code block in this post)
#[derive(GraphQLQuery)]
#[graphql(
    schema_path = "src/graphql/schema.json",
    query_path = "src/graphql/queries.graphql",
    response_derives = "Clone"
)]
pub struct CampaignsQuery;

#[derive(Clone)]
pub struct QueryResponse<T> {
    pub data: Option<T>,
    pub error: Option<String>,
}

// The query itself! There are three trait bounds, all related to the
// graphql-client crate types. The `Clone` and `'static` bits are needed
// to fulfill the lifetime requirements of the data here, since this is
// going to be used within the context of a Yew functional component
pub fn use_query<Q>(variables: Q::Variables) -> QueryResponse<Q::ResponseData>
where
    Q: GraphQLQuery, // GraphQLQuery is the trait provided by the graphql-client crate
    Q::Variables: 'static, // That trait also provides a way to specify the variables
    Q::ResponseData: Clone + 'static, // And the type you expect to get back
{
    // Local state to keep track of the API request, used to eventually
    // return the results to the user
    let state = use_state(|| QueryResponse {
        data: None,
        error: None,
    });

    // Now we get to the part of Yew that isn't so nice. I've got to clone
    // the state so I can move it into an asynchronous task, since Yew hooks
    // can't do async work directly; you have to spawn a local future for it
    let effect_state = state.clone();

    // This works identically to React's `useEffect` function
    use_effect_with_deps(
        move |_| {
            // As stated earlier, we spawn a local async task in order to run
            // the asynchronous API call code
            spawn_local(async move {
                // `build_query` is another nicety provided by the GraphQLQuery type
                let request_body = Q::build_query(variables);
                let request_json = &json!(request_body);
                // reqwest is a nice Rust http client
                let request = reqwest::Client::new()
                    .post("http://my-server.domain.com")
                    .json(request_json)
                    .send()
                    .await;
                // Set the data or errors as the results dictate
                match request {
                    Ok(response) => {
                        // Turn the response JSON into the expected types
                        let json = response.json::<Response<Q::ResponseData>>().await;
                        match json {
                            Ok(response) => effect_state.set(QueryResponse {
                               data: response.data,
                               error: None,
                            }),
                            Err(error) => effect_state.set(QueryResponse {
                                data: None,
                                error: Some(error.to_string()),
                            }),
                        }
                    }
                    Err(error) => effect_state.set(QueryResponse {
                        data: None,
                        error: Some(error.to_string()),
                    }),
                }
            });

            // The "cleanup" function, just like in React's `useEffect`
            // Since there's nothing to cleanup here, we write an empty function
            || ()
        },
        // The `useEffect` dependency here is `()`, the unit type, which is
        // equivalent to passing `[]` in React's `useEffect`
        (),
    );

    // Return the state's value to the user so they can use the API result!
    (*state).clone()
}

Isn't that cool? It has a simple API that I'm excited to use. Writing it felt similar to React, with some pain points that come from Yew being a still-developing framework and from the verbosity of Rust's type system, but I'm quite enjoying the development process in this tech stack.

Writing the hook took me a few iterations to get the API right, since I'd never written much Rust code dealing with generics and trait bounds. In fact, as of the time of this writing you can see at least one older version still in the codebase, because I haven't migrated everything over to the new and improved one yet.

Initially I had my own Response and Query types with weird lifetimes that were annoying to write and use, because I didn't understand that I could dig into the ResponseData type on the generic parameter Q through its GraphQLQuery bound. Going through this exercise forced me to better understand lifetimes, Clone, and generics, so I'm happy I spent the time iterating on it.

Potential Improvements

loading Field

Some GraphQL hook libraries provide a loading field on the data structure so you can tell if you're still waiting on the API. I'm conflicted on adding this, since you can discover whether the API has returned by checking if data or error is a Some value.

But it's not hard to add, and it would simplify if statements for users of the hook, so I'll probably add it once I start using the hook more heavily and feel that annoyance myself.

Improved Errors

Right now I'm just smashing the errors into a string. Ideally I'd return them in a structured manner, but I just haven't gotten to that yet.

Refreshing the Query

Given that use_effect_with_deps has () as its dependency, this query will only run the first time the component using it renders.

Ideally I would have better control over when the query refreshes, especially in scenarios where you add something new and want the UI to update. It might be easier to just pair it with another hook that lets you refresh the whole component, or maybe it's a new parameter to the query.

Time will tell. I'm not nearly close enough to caring about that kind of thing in the Dicebag app yet!

Support For Any GraphQL Client

Right now it only works with the structs produced by the graphql-client crate. That's what I use in my project, but if I were to export this hook for general use it would be nice to switch up the types as needed. I'm not even sure I can make the hook that generic, but it would be a useful learning opportunity to stretch the bounds of generics until they break.

Conclusion

Yew's hooks are fun! Writing my own taught me a lot more about Yew as a framework, generics, trait bounds, lifetimes, Rcs, and more.

Yew is still developing as a framework, but I'm excited to see where it goes. It already rivals React and other top JS frameworks in terms of speed, and that's with a small volunteer community working on it. WASM has a bright future, and because of that, Yew has an opportunity to play a big part in the Rust web development space. I enjoy working with it so much that I'm hoping to contribute to the project myself. And if I'm lucky, maybe I'll even get paid to write Rust on the front-end someday!

If you have any feedback regarding the hook or this post, feel free to open an issue on my repository or reach out to me on the social media platforms on my About Me page!