Taking longer than expected
Have you ever spent so much time coding a solution that you feel you should continue working on it, even if it's taking far longer than you initially expected? The more you work on it, the longer the process seems to take. After endless hours of labour, you come to the conclusion that there is a simpler solution; but you hesitate to scrap your original solution because of the sheer amount of time and effort you've invested in it.
What you are experiencing is called the Sunk Cost Bias. A sunk cost, in purely economic terms, is a cost that has already been incurred and cannot be recovered. The bias is a behaviour that propels us to keep investing in a losing situation because of what it has already cost us.
How to notice and reverse the Sunk Cost Bias when coding
Start by listing potential solutions to the problem. Then estimate the amount of time each might take. Afterwards, choose the solution that you think will take the least time while solving most of the problem.
Once you start working, keep track of the time you spend. If you observe that you haven't progressed much after a few hours, pause and reflect on what you are doing, or scrap the current work and choose another solution.
It's no surprise if you then think of a better solution that you missed during your initial brainstorm, because you now have more information.
This process is, however, harder to follow in practice, as we tend to become attached to our work and often take criticism personally.
A recent case study - Duplicate values
I am assigned the task of investigating why a list of orders has duplicated values, with only the TVA (VAT) value changing. The orders table is joined with the tax one.
One order can have several tax records (a one-to-many relationship). The join therefore retrieves a result set that includes duplicate order entries.
That's not the customer's requirement though. The orders should be unique.
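The setup can be reproduced with a minimal sketch. The table and column names here are hypothetical, and SQLite stands in for the real database, but the duplication mechanism is the same: each extra tax record multiplies its order in the joined result.

```python
import sqlite3

# In-memory database with hypothetical orders and tax tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, reference TEXT);
    CREATE TABLE tax (id INTEGER PRIMARY KEY, order_id INTEGER, tva REAL);
    INSERT INTO orders VALUES (1, 'ORD-1'), (2, 'ORD-2');
    -- Order 1 has two tax records: the one-to-many relationship.
    INSERT INTO tax VALUES (1, 1, 5.5), (2, 1, 20.0), (3, 2, 20.0);
""")

# Joining orders to tax repeats order 1, once per tax record.
rows = conn.execute("""
    SELECT o.id, o.reference, t.tva
    FROM orders o
    JOIN tax t ON t.order_id = o.id
""").fetchall()
print(rows)  # order 1 appears twice, with a different TVA each time
```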
I brainstorm a list of potential solutions:
- use the DISTINCT keyword on the foreign key, tax (est: 15 mins)
- use ORDER BY tax (est: 30 mins)
- investigate aggregating the tax field (est: 1 hour)
I experimented with the first two solutions for at least 3 hours before stopping. By my own estimates, neither should have taken more than an hour, but at this point neither change works.
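A quick SQLite sketch (with hypothetical table names) suggests why the DISTINCT attempt fails here: DISTINCT removes only rows that are identical in every selected column, and these rows differ in the TVA column, so both survive.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, reference TEXT);
    CREATE TABLE tax (id INTEGER PRIMARY KEY, order_id INTEGER, tva REAL);
    INSERT INTO orders VALUES (1, 'ORD-1');
    INSERT INTO tax VALUES (1, 1, 5.5), (2, 1, 20.0);
""")

# DISTINCT deduplicates whole rows, not orders: because the tva
# values differ, both rows for order 1 remain in the result.
rows = conn.execute("""
    SELECT DISTINCT o.id, o.reference, t.tva
    FROM orders o
    JOIN tax t ON t.order_id = o.id
""").fetchall()
print(rows)  # still two rows for order 1
```

ORDER BY fares no better for the same reason: it only reorders the duplicated rows, it does not remove them.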
I do not try aggregation, as it's more complex than DISTINCT, GROUP BY or ORDER BY. Despite the 3 or so hours spent struggling, I now understand the situation better, and having researched the subject further, I have also revised some SQL. (Nowadays ORMs do most of the heavy lifting.)
So instead of forcing through the solutions above, particularly the DISTINCT and ORDER BY approaches, which have already cost me several hours, I try a new approach.
Since the requirements ask only for the list of orders, without any duplicate rows, I keep the same JOIN query but add a limit of 1 on the tax side, which retrieves only one tax record for each order, essentially turning it into a one-to-one relationship.
For me: no more duplicates - problem solved.
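The exact query is not shown above, but one way to express the limit-one-tax-record-per-order idea is a correlated subquery with LIMIT 1, sketched here in SQLite with the same hypothetical tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, reference TEXT);
    CREATE TABLE tax (id INTEGER PRIMARY KEY, order_id INTEGER, tva REAL);
    INSERT INTO orders VALUES (1, 'ORD-1'), (2, 'ORD-2');
    INSERT INTO tax VALUES (1, 1, 5.5), (2, 1, 20.0), (3, 2, 20.0);
""")

# A correlated subquery with LIMIT 1 picks a single tax record per
# order, making the relationship effectively one-to-one.
rows = conn.execute("""
    SELECT o.id, o.reference,
           (SELECT t.tva FROM tax t
            WHERE t.order_id = o.id
            ORDER BY t.id LIMIT 1) AS tva
    FROM orders o
""").fetchall()
print(rows)  # exactly one row per order
```

Note that this silently drops the other tax records, which is fine only because the requirement asks for a unique list of orders, not for every TVA value.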