Some forms of defensive programming hide bugs, YOUR bugs.
Other forms enforce a design-by-contract style, enhancing both the quality and the maintainability of the software.
Choose wisely...
Lil' longer version:
Of course you know what "defensive programming" is, but perhaps you haven't heard it called that until now; let's see it in code:
public class NetworkFoo {
    /**
     * Performs connection.
     * @param host Machine where the server is running. Must not be null.
     * @param port Port the server is listening on. Must be in the range 1-65535.
     */
    public void connect(String host, int port) {
        if (host == null) {
            return; // silently ignore a null host
        }
        if (port < 1 || port > 65535) {
            return; // silently ignore an out-of-range port
        }
        ...
    }
    ...
}
Surely you have seen (or written) code like that; it is just one form of defensive programming. The techniques grouped under this term aim at making software robust against unexpected situations.
That sounds quite reasonable; however, the programmer often does not fully understand the implications of how she chose to implement the checks she wanted performed. To understand this, let's look at a variant of the code above:
public class NetworkFoo {
    /**
     * Performs connection.
     * @param host Machine where the server is running. Must not be null.
     * @param port Port the server is listening on. Must be in the range 1-65535.
     */
    public void connect(String host, int port) {
        assert host != null : "precondition violated: host is null";
        assert port >= 1 && port <= 65535 : "precondition violated: port " + port;
        ...
    }
    ...
}
Here we have replaced the 'if' blocks with assertions, so instead of silently returning upon malformed parameters, execution simply stops, because one of the preconditions is not met.
What is the difference between the 'connect' methods in these two pieces of code? Their preconditions and postconditions are NOT the same:
- In the latter, the preconditions are: 'host' shall not be null, and 'port' shall be between 1 and 65535.
- In the former, although they are stated in the method header, there are no actual preconditions on the parameters; instead there is an extra, implicit postcondition: when 'host' is null or 'port' is not between 1 and 65535, the method simply does nothing.
Thus, the two ways of implementing such "guards" have a different impact on our code, and so we should be careful when choosing which approach to use in each case.
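To make the difference tangible, here is a minimal sketch of a buggy caller (the 'Client' class and the 'lookupHost' helper are hypothetical, made up for illustration):

public class Client {
    public static void main(String[] args) {
        NetworkFoo foo = new NetworkFoo();
        String host = lookupHost(); // bug: returns null in some configurations

        // With the 'if' + silent-return variant, this call does nothing at all,
        // and the missing connection surfaces much later, if ever.
        // With the assertion variant (run with 'java -ea Client'), this call
        // throws an AssertionError right here, pointing at the buggy caller.
        foo.connect(host, 8080);
    }

    private static String lookupHost() {
        return null; // stand-in for a lookup that can fail
    }
}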
As stated at the beginning, defensive programming techniques try to shield code against unexpected [dangerous] situations that might break our logic; the most frequent checks are performed on method parameters, or on return values after invocations. When choosing how to protect the code, we have to take into account the origin of the "potentially malicious" data (see the sketch after this list):
- if such data comes directly from user input (or any other untrusted external interface), our logic should certainly account for malformed data;
- but if we perform the same checks on data coming from another method within our own code, we are hiding the fact that we passed improper parameters to ourselves, and thus hiding our own bugs!
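One common way to serve both cases is to validate untrusted input eagerly at the boundary with exceptions, and to reserve assertions for data that is already under our control. A minimal sketch (the 'PortReader' class and its method names are made up for illustration):

import java.util.Scanner;

public class PortReader {
    /** Boundary: the data comes from the user, so reject bad values with an exception. */
    public static int readPortFromUser(Scanner in) {
        int port = in.nextInt();
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    /** Internal: all callers are our own code, so a violation here is OUR bug. */
    static void openSocket(String host, int port) {
        assert host != null : "internal bug: null host";
        assert port >= 1 && port <= 65535 : "internal bug: port " + port;
        // ... actual connection logic ...
    }
}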
And what if we simply remove our "silently return" 'if' blocks and just rely on the caller meeting the preconditions specified in the method header? If the method is only called from our own code there should be no problem, as long as we always write bug-free code, which is surely not the case. The behaviour of our method would then be undefined for malformed input, meaning that a hypothetical invocation with improper parameters may or may not lead to a noticeable effect.
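For example, in a hypothetical unchecked variant the failure surfaces far away from the real mistake:

public class UncheckedNetworkFoo {
    public void connect(String host, int port) {
        // No guards at all: a null 'host' only explodes here (or even deeper),
        // possibly several stack frames away from the caller that actually
        // broke the contract.
        String normalized = host.trim(); // NullPointerException if host == null
        // An out-of-range 'port' may travel even further before anything
        // fails, or may quietly reach the wrong service.
        // ... rest of the connection logic ...
    }
}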
The following table summarizes the consequences of applying (or not) defensive programming to our methods:
Source of the "potentially dangerous" data:

| | External source | Our own code |
|---|---|---|
| No defensive programming at all | Fragile code against malformed input | Undefined behaviour upon caller bugs |
| Silently return | Robust code against malformed input | Hides our own bugs |
| Assertions | Our code breaks upon malformed input | Design-by-contract style |
[Ask Google about design-by-contract to learn more about it.]
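As a rough illustration of that style: each method states preconditions (what it demands from its callers) and postconditions (what it guarantees in return), and assertions can encode both. A sketch only; the 'Account' class is made up for illustration:

public class Account {
    private long balanceCents;

    /**
     * Withdraws money.
     * Precondition: 0 < amountCents <= current balance.
     * Postcondition: balance decreased by exactly amountCents, never negative.
     */
    public void withdraw(long amountCents) {
        assert amountCents > 0 && amountCents <= balanceCents : "precondition violated";
        long before = balanceCents;
        balanceCents -= amountCents;
        assert balanceCents == before - amountCents && balanceCents >= 0 : "postcondition violated";
    }
}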
One common practice is to disable assertions when a system goes into production. My opinion is that, if it is tolerable for the system to crash (sporadically), they should remain enabled, to detect bugs that escaped the development and validation phases.
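For reference, Java assertions are disabled by default at runtime and are enabled with the '-ea' JVM flag. A standard little idiom (not from the original code) makes the current status visible:

public class AssertionStatus {
    public static void main(String[] args) {
        boolean enabled = false;
        // The assignment below only executes when assertions are enabled,
        // i.e. when the JVM was started as 'java -ea AssertionStatus'.
        assert enabled = true;
        System.out.println("assertions enabled: " + enabled);
    }
}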
In the development of mission-critical systems, it is not uncommon to mandate the 'silently do nothing' flavour of defensive programming in order to avoid [unexpected] crashes. I have no settled opinion on this topic... what is the most acceptable choice in these cases? I suppose a system crash is unacceptable, but is enabling silent bugs acceptable?
Summing up, defensive programming is a useful tool in the pursuit of software quality and maintainability, but we should be wise when choosing how we "protect" our code, to ensure we achieve robustness without hiding our own bugs.