“Doesn’t That Feel Better?”

Jatin Chowdhury
6 min read · Aug 30, 2021


A few weeks ago, I was fortunate to reconnect with a former co-worker whom I had not seen for several years. Let’s call him Chris. Since meeting with him again, I keep thinking back to one particular moment from our time working together.

I was working to implement a new feature in our codebase, and at the same time clean up some obsolete code related to that feature. Although I thought my implementation was nearly complete, Chris had taken a look at my code and pinged me on our company Slack server with a couple of suggestions. After going back and forth for the next few hours, and iterating through three or four options for the internal design of the feature, we finally settled on an implementation that we were both happy with. After I sent Chris a code snippet outlining the final design, he sent back a short message, four words that still ring in my mind today: “Doesn’t that feel better?”

I had a strange sensation as I read the message. On one hand, I was fresh out of an engineering degree, where I’d had professors drill into my brain that engineering was a “hard” science, where so-called “feelings” didn’t matter, only objective truths and the discoveries of said truths. But on the other, I knew exactly what Chris meant: the new design did actually feel better, even if I couldn’t quite explain it in words.

In the years since, as my academic, professional, and hobbyist endeavors continually require me to write code, I’m always striving to write code that doesn’t just work, but “feels good” to me at some instinctual level. The Internet has plenty of articles about how to avoid bad code (often referred to as bad code “smells”). While avoiding bad code is a necessary step on the way to good code, I wanted to share some lessons specifically about writing good code: in particular, lessons I’ve learned from Chris and other, more experienced developers.

Refactoring

Probably the most crucial lesson I learned from Chris was the importance of refactoring. For the unfamiliar, refactoring is the process of editing or rewriting a piece of code to improve its quality without necessarily changing its functional behavior. There are many dimensions to refactoring, and many reasons why you may want to do it. For example, renaming variables or methods in your code to be more descriptive is a common refactor that is crucial for making the code more readable. Another common refactor is to create “helper functions” that implement some shared logic required in multiple parts of the codebase. Helper functions reduce code repetition, meaning that if the shared logic needs to change in the future, the programmer only needs to make the change in one place. For more information on refactoring, I highly recommend Martin Fowler’s book of the same name.
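To make the helper-function idea concrete, here’s a minimal sketch of the kind of refactor I mean (the names clampToRange and GainProcessor are hypothetical): clamping logic that was previously copy-pasted at every call site gets pulled into one shared helper.

```cpp
#include <algorithm>

// Hypothetical helper: the shared parameter-clamping logic lives in one place.
float clampToRange (float value, float minValue, float maxValue)
{
    return std::clamp (value, minValue, maxValue);
}

struct GainProcessor
{
    // Before the refactor, each setter had its own copy of the clamping code.
    // Now, if the clamping rules ever change, only clampToRange needs to change.
    void setGain (float newGain) { gain = clampToRange (newGain, 0.0f, 1.0f); }
    void setMix  (float newMix)  { mix  = clampToRange (newMix,  0.0f, 1.0f); }

    float gain = 1.0f;
    float mix  = 1.0f;
};
```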

Some IDEs will even suggest and automate refactoring for you!

These days, when I commit code to a project that I’m working on, it’s almost never the first time I’ve written that code. Typically, when I want to add a new feature, I’ll start by coding up a quick-and-dirty implementation, just to make sure that I can get it to work. From there, I’ll come back to the code and refactor it into a better state. After two or three more iterations of refactoring, I’m usually happy with the code quality, and feel ready to move on.

On a psychological level, refactoring requires the programmer to understand that their first implementation might not be perfect, and that as both program and programmer grow, the needs of the codebase may change as well.

“Where’s the pattern?”

Perhaps the most fun part of watching Chris code, or reviewing code that he had written, was observing his ability to find and exploit patterns. I think all programmers know the feeling of writing repetitive code, and growing increasingly annoyed with it. Chris was a Jedi master of reducing previously repeated code into just a few lines of generalized code that somehow maintained the flexibility and extensibility of the original implementation, while also improving the code readability.

C++ “Lambda expressions” are a great tool for exploiting patterns.

The idea of pattern matching in programming is that it’s better to have 10–20 lines of important code that are continually re-used according to some pattern, than 100–200 lines of more repetitive code. A few times Chris and I talked about the “ultimate goal”: to implement a new feature while also reducing lines of code in the codebase. While that goal was often untenable, we came pretty close a handful of times, and even succeeded once or twice.
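As a rough illustration (not Chris’s actual code, just a hypothetical Smoother class), a lambda can capture a repeated pattern once and then be reused at every call site:

```cpp
#include <array>

struct Smoother
{
    void process (const std::array<float, 3>& targets)
    {
        // The shared pattern: one-pole smoothing of a state variable towards a target.
        auto smoothTowards = [this] (float& state, float target)
        {
            state += smoothingCoeff * (target - state);
        };

        // Three previously copy-pasted smoothing blocks collapse into three one-line calls.
        smoothTowards (gain,   targets[0]);
        smoothTowards (cutoff, targets[1]);
        smoothTowards (mix,    targets[2]);
    }

    float gain = 0.0f, cutoff = 0.0f, mix = 0.0f;
    float smoothingCoeff = 0.05f;
};
```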

That said, finding patterns in code doesn’t come easy (at least not for me). While I’m much better at it than I was a few years ago, it’s still something I struggle with today, leading to hours and hours of staring at my code editor mumbling to myself “where’s the pattern, where’s the pattern?”

Zero-Overhead Principle

As an audio programmer, performance is often of primary importance for much of the code that I write. With that in mind, it’s very important that finding and exploiting a pattern as in the above section does not introduce any extra performance overhead. For a long while, I believed it to be an unavoidable tradeoff: that more general code was inherently less performant, and to improve performance, it was sometimes necessary to sacrifice a little bit of code generality.

A few months ago, I stumbled across an interview with Bjarne Stroustrup, creator of the C++ programming language, where he talks about an idea called the “zero-overhead principle”. The fundamental idea is that whenever you introduce a generalization or abstraction into your code, it should always be as performant as a direct implementation for a specific case of the generalized method. As Stroustrup says, “Yes, there’s a tension between efficiency and abstraction, but, you also get the interesting situation that you get the best efficiency out of the best abstraction”.

Hear Stroustrup talk about the “Zero-Overhead Principle” in C++.

Thinking about the zero-overhead principle in this way has been of massive importance to me in recent months, as I’ve spent a lot of time trying to design high-performance libraries for things like real-time neural network inferencing, and Wave Digital Filter circuit modelling. In a sense, I like to think of the zero-overhead principle as an extension of the pattern matching concept discussed above: finding a pattern can always help improve code quality, but finding the right pattern can also improve performance.
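As a small, hypothetical example of what “zero overhead” means in practice: a templated gain function is more general than a hand-written version for one specific buffer size, yet a modern optimizing compiler will typically emit essentially the same machine code for both.

```cpp
#include <array>
#include <cstddef>

// Generalized: works for any sample type and any compile-time buffer size.
template <typename FloatType, std::size_t N>
void applyGain (std::array<FloatType, N>& buffer, FloatType gain)
{
    for (auto& sample : buffer)
        sample *= gain;
}

// Direct implementation for one specific case.
void applyGainDirect (std::array<float, 512>& buffer, float gain)
{
    for (auto& sample : buffer)
        sample *= gain;
}

// applyGain<float, 512> and applyGainDirect should compile to the same code:
// the abstraction costs nothing at run time.
```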

Encapsulation / Modularity

No Secrets! Declare every method and variable public. After all, somebody, sometime might want to use it… This makes it very difficult to later change the way that anything works under the covers.

-Roedy Green, How To Write Unmaintainable Code

One final element of programming wisdom that I would like to share has to do with encapsulation and modularity. One of my favorite features of C++ is the pair of public and private keywords, which determine whether a member variable or function can be accessed outside of its class. While it may be tempting to make everything public (as in the tongue-in-cheek quotation above), I’ve learned that doing so can be a good way to wind up with sneaky bugs whose source is hard to track down. For example, if I have a class with some state stored in a member variable, making that member variable private makes it easier to track where and when the state is changed, and makes it less likely that I’ll accidentally introduce a bug by doing something that I shouldn’t with the member variable.
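For instance, a hypothetical LevelMeter class might keep its state private so that every change goes through a single, easy-to-trace method:

```cpp
#include <algorithm>
#include <cmath>

class LevelMeter
{
public:
    // The only way to modify the internal state: easy to trace, easy to breakpoint.
    void pushSample (float sample)
    {
        peakLevel = std::max (peakLevel, std::abs (sample));
    }

    float getPeakLevel() const { return peakLevel; }

private:
    // If this were public, any code anywhere could overwrite it,
    // and tracking down a bad value would be much harder.
    float peakLevel = 0.0f;
};
```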

This idea of encapsulation goes beyond object-oriented programming, and is very important for library design as well. I’ve come to think of designing a class or library as similar to designing an entire user-facing piece of software: many of the implementation details are hidden from the end user, but the interface that is exposed to the end user should be easy to use, and should expose as much control over the overall functionality as possible.
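A sketch of what that looks like for a tiny “library” (again, a hypothetical OnePoleFilter, not any particular real API): the user gets two simple controls, while the coefficient math and internal state stay hidden.

```cpp
#include <cmath>

class OnePoleFilter
{
public:
    // The public interface: simple, and hard to misuse.
    void setCutoff (float cutoffHz, float sampleRate)
    {
        coeff = 1.0f - std::exp (-2.0f * pi * cutoffHz / sampleRate);
    }

    float processSample (float input)
    {
        state += coeff * (input - state);
        return state;
    }

private:
    // Implementation details the user never needs to see (or touch).
    static constexpr float pi = 3.14159265358979f;
    float coeff = 0.0f;
    float state = 0.0f;
};
```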

Conclusion

Anyway, I hope this discussion has been interesting. Since meeting up with Chris, I’ve been thinking a lot about these ideas, and wanted to take a short break from writing about low-level audio DSP to share some of them. Onward!
