The Pre-commit Verification Myths
"I pulled a fresh workspace from the latest label and ported my changes onto it. I built all the images required and passed all tests with flying colours before committing. How come I am blamed for breaking the build? What kind of software development process is this?" You may have heard such arguments from frustrated developers, perhaps repeatedly. In this post we'll look at some common misconceptions revolving around pre-commit quality verification.
The pre-commit verification will prevent breakages/regressions
This is a myth, if taken literally. Yes, if done properly, pre-commit verification will normally catch plainly faulty changesets. But such verifications, independently performed by multiple developers in parallel, are not capable of detecting failures caused by interference between the changesets in flight, simply because the changesets are not verified together. Only after the changes are committed does such interference become visible - as breakages/regressions.
Wait! Is it even possible for a committed software change to cause a build breakage (or any kind of quality regression for that matter) despite successfully passing all prescribed pre-commit verifications? And I'm not talking about accidental causes like false positives. The short answer is yes. I'll illustrate with a simple example.
Let's assume the code in the latest version of a project branch includes a certain function, defined in one file
and invoked in a couple of other files. Two developers working in parallel on that project are preparing to make
some changes to the code.
Developer A reworks that function, adding or removing a mandatory argument, and, of course, updates all
invocations of the function in all the associated files to match the updated definition.
Developer B decides to add an invocation of said function in a file which didn't contain any such invocation before
and is thus not touched by developer A's changes. Naturally, developer B fills in the argument list to match the
function's definition visible in the latest label - which is the old definition, as developer A's changes aren't yet
committed.
Both developers correctly perform the pre-commit verifications, get a pass result and proceed to commit their code
changes. Since the two changesets do not touch the same files, no merge conflict occurs - which would typically be an
indication of potential problems, warranting a closer look and maybe a re-execution of the pre-commit verification.
Nothing whatsoever gives even a subtle hint that something may go wrong.
Yet the end result is catastrophic - the build is broken as the function call added by developer B doesn't match
the function definition updated by developer A.
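The two changesets can be sketched in a few lines of Python. This is a minimal illustration with invented names (`render_new`, `report`); the original scenario applies to any language where call sites must match the function definition, and in a compiled language the combined result would fail at build time rather than at run time:

```python
# Hypothetical sketch of the conflicting changesets; all names are invented.

# --- Developer A's changeset: the function gains a mandatory 'width' argument.
# A also updated every call site that existed when A branched off.
def render_new(text, width):
    return text.upper().ljust(width)

# --- Developer B's changeset: a brand-new call site in a file A never touched,
# written against the OLD one-argument definition visible in the latest label.
def report():
    return render_new("ok")  # misses the 'width' argument added by A

# Each changeset passes verification in isolation; combined, the call breaks:
try:
    report()
except TypeError as e:
    print("broken after both commits:", e)
```

Neither developer made a mistake: the breakage only exists in the combination of the two changesets, which no pre-commit verification ever saw.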
Who's to blame? Well, a proper analysis would recognize that no human error was involved and the process allowing the two developers to perform their pre-commit verifications in parallel is, in fact, responsible for the failure. But a superficial analysis may, incorrectly, lay blame on the developer performing the last of the two commits. Or even on the one performing the first commit, simply because the analysis may scan the commits in the order in which they were performed.
The above is just a simple, obvious example of a regression caused by conflicting changesets - trivial to fix. But changes may interfere with each other in more obscure ways, causing regressions that are much more difficult to investigate and resolve in a timely manner - unacceptable system performance degradation, for example.
This change is safe, it works perfectly in this other branch
It may come as a surprise to many, but this argument is heard way too often, particularly in discussions around porting fixes from one release branch to another.
The above example demonstrates how simple, individually valid changes can adversely impact each other even in the context of the same branch and label. The chance of such interference is much higher when the branch context itself changes, as any changeset present in, or absent from, the destination branch compared to the source branch may negatively impact the functionality of the changeset being ported.
As a best practice, changes to an integration branch should be considered equally risky and subjected to the same verifications, regardless of whether they are brand new or ported from some other branch.
Extending the pre-commit verification coverage should increase our branch stability
As mentioned above, individual pre-commit verifications, if performed in parallel, are unable to detect regressions caused by conflicting changes. The probability of such regressions increases with the number of parallel verifications, which increases with the total duration of the verifications and with the rate of changes being committed into the integration branch.
Extending the pre-commit verification coverage typically leads to longer verifications by:
- increasing the actual verification execution time
- increasing the queue waiting times if the availability of verification resources is limited
Unless the identification of a particular regression's root cause is trivial, the effort of identifying a pair of conflicting changesets in a pool of suspects is typically higher than that of identifying a single faulty changeset, simply because the number of possible pairs grows quadratically with the number of changesets in the pool. This makes the impact of conflicting changesets higher than the impact of simply faulty changesets.
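A back-of-the-envelope sketch makes the asymmetry concrete. The numbers below are purely illustrative assumptions, not measurements from any real project:

```python
from math import comb

# Illustrative assumptions: a branch receiving 10 commits per hour,
# with a pre-commit verification cycle lasting 3 hours.
commit_rate = 10   # changesets committed per hour
duration = 3       # hours one verification takes

# Changesets verified in parallel - and thus never verified together:
in_flight = commit_rate * duration

# Suspect pool for a single faulty changeset vs. a conflicting pair:
single_suspects = in_flight           # grows linearly
pair_suspects = comb(in_flight, 2)    # grows quadratically

print(single_suspects, pair_suspects)  # 30 single suspects, 435 candidate pairs
```

Doubling the verification duration here doubles the single-changeset suspects but roughly quadruples the candidate pairs, which is why longer verifications can make conflict-induced regressions disproportionately expensive to investigate.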
It's not uncommon, especially in very large-scale projects with high commit rates, for the number of regressions caused by conflicting changesets to exceed the number of regressions prevented by the extended pre-commit verification coverage. In such cases the branch stability actually decreases, which can lead to project congestion.
In summary, the pre-commit verification is not a silver bullet. Every project should evaluate how it makes use of it,
weigh its benefits against its limitations in that specific project context and adjust accordingly, if necessary.
It's worth mentioning the central, crucial role that the pre-commit verification plays in non-blocking CI systems.
But we'll focus on that in another post.