Today, I lit a bushel of money on fire. We still measure money in bushels, right?
As I was implementing the handler for my UpdateItemsCommand, I noticed we were potentially doing a lot of insertions. My perfectionism fired up, and I got to work on an “insertMany” method for an array. I spent some time balancing a clean API against performance, writing unit tests and benchmarks, and eventually got to the point where, in a contrived example, insertMany was about twice as fast as inserting the items one at a time.
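For context, the idea boils down to something like this. This is a minimal sketch, not my actual implementation (which also fussed over the API); the point is that a single splice shifts the array’s tail once, while a per-item loop shifts it once per insertion:

```typescript
// Insert several items at one index with a single splice.
// The tail of the array gets shifted exactly once.
function insertMany<T>(target: T[], index: number, items: T[]): T[] {
  target.splice(index, 0, ...items);
  return target;
}

// The naive path it replaces: every splice re-shifts
// everything after `index`, so the tail moves once per item.
function insertOneByOne<T>(target: T[], index: number, items: T[]): T[] {
  for (let i = 0; i < items.length; i++) {
    target.splice(index + i, 0, items[i]);
  }
  return target;
}

// insertMany([1, 2, 5], 2, [3, 4]) -> [1, 2, 3, 4, 5]
```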
I pulled my new insertMany into the Command handler, did another benchmark and… the handler was ~0.01% slower.
In the previous article, I wrote that the most common use case was adding a single GalleryItem. Well, in that case, my insertMany is actually slower than a standard single insert.
What’s more, let’s pretend I WAS able to make the Command handler twice as fast (which is nonsense for many reasons). Would that even matter? The processing time is ~2ms. Reducing it to zero would go unnoticed by users. Even thinking about operating at scale (foolish before you have a single user), the cost of the network calls and writing to the DB would vastly outweigh those 2ms.
In the time I spent improving this handler’s performance, I could have been making progress towards an MVP. How much is it worth to launch a day sooner and start validating and iterating on your business? Because that’s how much I burned.
All this is a long way to say: “Premature optimization is the root of all evil”.
Next time I hear the seductive whispers of performance optimization in my ear, I will try to ask myself “what is the business impact of making this code 10x faster?” And if there isn’t a good answer, I’ll move on to something more important.