and sells shares and you are pushing new releases out daily because your company has to stay ahead of the game. You have a test that has been flickering for as long
as you can remember. Somebody once had a look at it, said they couldn't find any problems with the code, and concluded that the test was just unreliable; this has been accepted, and now everybody just does a quick manual check if it goes red. A new cut of code goes in and that test that keeps flickering goes red again. You are used to that test flickering, and everything seems to work normally when you perform a quick manual test, so you ignore it. The release goes ahead, but there is a problem: suddenly your trading software starts selling when it should be buying, and buying when it should be selling. It isn't picked up instantly because the software has been through testing, so it must be good and no problems are expected. An hour later all hell has broken loose; the software has sold all the wrong stock and bought a load of rubbish. In the space
of an hour the company has lost half its value and there is nothing that can be done
to rectify the situation. There is an investigation and it's found that the flickering test wasn't actually flickering this time; it failed for a good reason, one that wasn't instantly obvious when performing a quick manual check. All eyes turn to you; it was you who validated the code that should never have been released, and they need somebody to blame; if only that stupid test hadn't been flickering for as long as you can remember...
The preceding scenario is an extreme one, but hopefully you get the point: flickering tests are dangerous and should not be tolerated.
We ideally want to be in a state where every test failure means that there is an undocumented change to the system. What do we do about undocumented changes? Well, that depends. If we didn't mean to make the change, we revert it. If we did mean to make the change, we update the documentation (our automated tests) to support it.
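As a minimal sketch of this idea (the function and test names here are hypothetical, invented for illustration), a deterministic test acts as executable documentation of the system's behaviour. When the test fails, either the behaviour change was unintended and the code is reverted, or it was intended and the test is updated in the same change:

```python
# Hypothetical trading helper: the test below documents its expected behaviour.
def order_side(signal: float) -> str:
    """Return 'buy' for a positive signal, 'sell' otherwise."""
    return "buy" if signal > 0 else "sell"


# These tests are our documentation. If one fails, either the change was
# unintended (revert the code) or it was intended (update the test in the
# same commit so the documentation stays current). A failure is never
# something to shrug off as "flickering".
def test_positive_signal_buys():
    assert order_side(1.5) == "buy"


def test_non_positive_signal_sells():
    assert order_side(-0.3) == "sell"
```

Because the tests are deterministic, a red result always means the documented behaviour has changed, which is exactly the state described above: every failure points at an undocumented change rather than at an unreliable test.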