Agile code, refactoring, and FOSS projects

Developing 11tyCMS has taught me so much about the impact of agile code on productivity. In early alpha, the code was inconsistent and difficult to follow. There were many quirks, all causing frequent bugs, slowing down debugging and draining my energy. It was starting to put me off working on it.

I spent so much time in "MVP mode" that I lost sight of the quality I was aiming for: adaptable code that worked with me, not against me. With a full-time job, the time and energy required made that difficult to achieve. But knowing I'm opening the source code soon motivated me to refactor. I wanted a code base that other developers could work with, without worrying they'd struggle to understand everything. A FOSS project can only be as good as the contributors you're lucky enough to have!

Knowing when to refactor

None of this is to say working toward an MVP is bad. On the contrary: as a kid, I used to start by chasing perfection, trying to write the perfect code from the outset... Thing is, that's literally impossible when you're starting from nothing. Worse yet? It meant I never finished any projects. I got so obsessed with trying to create the perfect end product from zero that it froze me. You have no idea what the parameters of your problem are until you've got an MVP out the door. Once everything functions, you have a better idea of what the scope of your variables, functions and files should be. At that point, you can think about rewriting what you've done to be more elegant, reusable and understandable.

There comes a stage after you've reached MVP where inspiration floods in for new features. You get excited, there are so many ideas. You get to work: let's implement that new feature! But you find that your code is a little inflexible and difficult to read. It's not impossible though, it just takes more effort. You spend a few hours implementing this new feature, maybe adding a few extra quirks to the code to make it work. The new feature works! The code could use a refactor. But time is short, and it's a side FOSS project after all. A week goes by, you get busy with your job. You have less energy to work on your project now, and the code's rigidity and quirks kinda put you off. It feels like you're fighting with your code, and you just... Give up for a bit.

3 months pass, work has died down a little and it feels like you have more time and energy for your beloved FOSS project. So you fire up your IDE, and start working on yet another feature. You're really excited, there's so many ideas to work on! But then you realize what you left behind all those months ago: inflexible code that's difficult to read, with quirks bolted on top to implement the other features. It becomes clear: you can't implement these features until you've refactored the code. Not without wasting precious time and energy, which are already at a premium for side projects. It's at this point you start thinking about how you can refactor.

The power of refactoring

The problem with the refactoring phase is that... It depends on how late you left it. For many reasons, the MVP stage and all the quirks that come with it may well have overstayed their welcome. If your project is still small and there aren't too many files and lines of code to change, it's not too painful. But if it's a bigger project? The devil you know starts looking very alluring. Particularly with all the friction involved in restructuring everything, and relearning your muscle memory to work with the refactor. Thankfully though, my relentless drive for writing maintainable and clear code makes it difficult NOT to do this early on.

A good example of this is my Electron project: 11tyCMS. It was a project I started on holiday, and I rarely had the energy or time to work on it outside of work, as is often the case for many FOSS project maintainers. This also contributed to a vicious cycle: the time I could spend on it was very sporadic, and refactoring looked like such a huge task that it felt difficult to justify. But I hit a wall: there was that feature, one I literally couldn't implement unless I did some major refactoring. So I made a start... I began restructuring how I did IPC in my Electron app.

In Electron, your app is split in two: you have the "main process" and the "renderer". Main is where all the Node.js code lives; the renderer is where the "browser" side of things runs. IPC (inter-process communication) is the mechanism that allows the main process and the renderer to communicate with each other. So in 11tyCMS's case, this lets me call Node.js functions from my React code.

In Electron, you have to register every function you want exposed via IPC. You register it under a channel in the main process, and then you expose it to your renderer via the preload file. Each time, you have to refer to your functions exactly: the correct channel name, and all the correct arguments (bearing in mind that, in your main-process handlers, you have to get the order right, with event being the first argument). This made creating new functions a nightmare. I would frequently mess up the order of arguments or forget channel names. This meant constantly switching between different files to confirm I was getting things right, that there were no typos. This needed to change!
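To illustrate the old pattern, here's a hypothetical sketch. The channel and function names are made up, and tiny stand-ins replace Electron's ipcMain and ipcRenderer so the shape of the boilerplate is visible outside Electron:

```javascript
// Illustration only: minimal stand-ins for Electron's ipcMain and
// ipcRenderer (the real invoke returns a Promise).
const handlers = {};
const ipcMain = { handle: (channel, fn) => { handlers[channel] = fn; } };
const ipcRenderer = { invoke: (channel, ...args) => handlers[channel]({}, ...args) };

// Main process side: register each function by hand, under a string channel
ipcMain.handle('files:readFile', (event, path) => `contents of ${path}`);

// Preload side: repeat the channel name and argument list, again by hand
const api = {
  readFile: (path) => ipcRenderer.invoke('files:readFile', path),
};
```

One typo in 'files:readFile' on either side, and the call quietly breaks. Multiply that by every function in the app, and the friction adds up fast.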

In the end? I created a new file: functions.js. In here, I import all the files that have functions I want to expose to the renderer (you just need to export your functions in an object), and I pass them into an object called functionsByChannels, with each key being the name of the file I imported. Then, a for loop checks the name of each function in these imports: if its name starts with an underscore, it's a private function to be kept to the main process. Otherwise? It goes in the exposedFunctions object that the functions file exports. I end up with an object with consistent channel names based on the file names, along with consistent function names and argument orders:

import files from './functions/files';
import site from './functions/site';

let exposedFunctions = {};

const exposeChannelFunction = (func, channelName) => {
    exposedFunctions[channelName] = func;
};

// Each key becomes a channel prefix; each value is that module's exports
let functionsByChannels = {
    files,
    site
};

for (const channelParentName in functionsByChannels) {
    for (const childFuncKey in functionsByChannels[channelParentName]) {
        // Underscore prefix marks a private function: keep it in main
        if (childFuncKey.startsWith("_")) {
            continue;
        }
        console.log("Registering", childFuncKey, "under", channelParentName);
        exposeChannelFunction(functionsByChannels[channelParentName][childFuncKey], `${channelParentName}:${childFuncKey}`);
    }
}

export default exposedFunctions;
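To see what the loop produces, here's a self-contained sketch of the same pattern, with inline objects standing in for the imported files (files and site here are made-up stand-ins, not the real 11tyCMS modules):

```javascript
// Stand-in "modules": each exports its functions in a plain object
const files = {
  readFile: (path) => `contents of ${path}`,
  _resolvePath: (p) => p, // underscore prefix: private, stays in main
};
const site = {
  build: () => 'built',
};

const functionsByChannels = { files, site };
const exposedFunctions = {};

for (const channelParentName in functionsByChannels) {
  for (const childFuncKey in functionsByChannels[channelParentName]) {
    if (childFuncKey.startsWith('_')) continue; // skip private functions
    // The channel name is derived from the file name, so it's always consistent
    exposedFunctions[`${channelParentName}:${childFuncKey}`] =
      functionsByChannels[channelParentName][childFuncKey];
  }
}

console.log(Object.keys(exposedFunctions)); // ['files:readFile', 'site:build']
```

The private _resolvePath never leaves the main process, and no channel name is ever typed by hand.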

Now, in the main process, all I have to do is import the functions.js file, and register the functions to IPC:

for (const channelName in exposedFunctions) {
    // event is the IPC event object; we drop it and forward the rest
    ipcMain.handle(channelName, (event, ...args) => exposedFunctions[channelName](...args));
}

Finally, we're now ready to expose these to the renderer:

let api = {};
for (const channelName in exposedFunctions) {
  console.log('registering function', exposedFunctions[channelName].name, 'with channel', channelName);
  // Each exposed function becomes a renderer-side wrapper that invokes its channel
  api[exposedFunctions[channelName].name] = (...args) => ipcRenderer.invoke(channelName, ...args);
}

...

contextBridge.exposeInMainWorld('api', api)  

And bam. Now all I need to do is call window.api.functionName in my renderer, and I have what I need!

This tweak changed everything. Now all I needed to do was add a function to a file's exports to have it exposed to the renderer, so adding new features was WAY faster. Development picked up pace again: I had more energy to work on it, things took less time, and it felt good to work with. With these energy gains, I had more motivation to do even more refactoring... So I implemented Zustand into the React side of the app.

Zustand is by far my favorite state management library for React. It's so lightweight and simple, but its impact is profound. No more drilling props through 5 different components, only to lose track of where they originated and get weird bugs as a result. Just centralized store files for each type of data, each one integrating beautifully with this IPC system I've built up. Upon implementing this, the React side of things became joyful to work with. Everything feels so clean and easy to reason about now. If I have a problem with data? I'm not sifting through 10 different component files to find the culprit. Again, this gave me even more energy and time to work on my project. So I began working on routing, which again made it even better to work with.

Can you see the domino effect a well timed and placed refactor can have on a project? Especially for FOSS. When time and energy are so limited, you want a code base that works with you, not against you. The easier it is to work on, the less energy it will take to work with. Better yet? New contributors will have a much easier time adding new features and donating their time for bug fixes. Everyone wins!

This isn't to say that the code I have now is perfect. Far from it; I'm not even sure that's possible! But there's one thing I'm sure of: the quality of my code base is no longer a barrier to adding new features, and I think that's a healthy sign of a good refactor.