Changing the rules changes people’s attitudes
There is a 1998 experiment that keeps popping up in the new “freakonomics”-type literature (e.g. Economics 2.0 ), in which a subset of Israeli day-care centers began charging a fine to parents who arrived late to pick up their children. To everyone’s surprise, the number of late pickups roughly doubled. Moreover, when the fine was later dropped, late arrivals stayed at the new, higher level. The theory is that the moral responsibility parents felt to be on time was much stronger than the economic cost of the fine, which the parents rationalized as a fee, thus removing the stigma of being late. The fee made it “just business”. As Professor Michael Sandel summarized…
“So what happened? Introducing the fine changed the norms. Before, parents who came late felt guilty; they were imposing an inconvenience on the teachers. Now parents considered a late arrival a service for which they were willing to pay. Rather than imposing on the teacher, they were simply paying her to stay longer. Part of the problem here is that the parents treated the fine as a fee. It’s worth pondering the distinction. Fines register moral disapproval, whereas fees are simply prices that imply no moral judgement.”

Don’t make “not my job” “just business”
The blind experiment is a well-established principle in science: people should not know too much about the thing they are testing, or the results are often biased. Scientists gathering their own raw data (not to mention interpreting their own data) often get the results they expected to get, where objective outsiders don’t. By that reasoning, it is argued that software development projects should engage external, objective “QA testers” to develop and administer test suites against the code produced by the “programmers”. Since many programmers don’t like to eat their spinach, ahem, write their own tests (or documentation, for that matter), there is rarely much argument against the idea.
From my experience though…
- Programmers will suffer peer pressure and social costs if they fail their own tests (which is a good thing).
- Programmers who understand that they are obligated to deliver testable components will do so more often if they must actually produce the tests themselves, compared to those for whom testing is “not my job”.
- The act of writing a test forces a clearer understanding of both the interface and the implementation of the tested component compared to just programming it.
- Programmers who fail their own tests will be much more likely to change that component’s implementation if needed, rather than obstinately maintaining that the externally-produced test is wrong.
- Writing your own tests is the most systematic method of “eating your own dogfood”.
I say both independent testers AND the original programmers should each write independent test suites. That way you get the power of both perspectives. This is an old lesson from the days of computer punch-cards: two different operators key-punched the same data so the decks could later be compared, thus eliminating most typos.
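The double-suite idea can be sketched in a few lines of Python. The component and both suites below are purely illustrative (a hypothetical word_count function, not anything from the text): the programmer’s suite checks the cases the implementer had in mind, while the independent tester’s suite probes edge cases the implementer might not have considered.

```python
def word_count(text):
    """Hypothetical component under test: count whitespace-separated words."""
    return len(text.split())

# Suite written by the original programmer, who knows the implementation
# and tests the cases they designed for.
def programmer_suite():
    assert word_count("one two three") == 3
    assert word_count("") == 0
    return "programmer suite passed"

# Suite written by an independent tester, who probes inputs the
# implementer may not have thought about.
def tester_suite():
    assert word_count("  leading and   extra   spaces ") == 4
    assert word_count("tabs\tand\nnewlines count") == 4
    return "tester suite passed"

if __name__ == "__main__":
    print(programmer_suite())
    print(tester_suite())
```

Either suite alone can miss a whole class of bugs; run together, they give the two-perspective check the punch-card comparison provided.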
 A Fine Is a Price, Uri Gneezy and Aldo Rustichini, Journal of Legal Studies, vol. XXIX (January 2000)
 Brain food: when does a fine become a fee?, Aditya Chakrabortty, The Guardian, Tuesday 23 February 2010
 Why an L.A. Times wikitorial effort went wrong, Clay Shirky, O'Reilly Media Gov 2.0 Summit, 2009-09-09
 pg 7, Economics 2.0, Norbert Haring, Olaf Storbeck, Palgrave Macmillan, 2009
 Michael Sandel, The Reith Lectures 2009, BBC, 9 June 2009