This scenario is one that can occur in both isolated automation teams and integrated teams where everybody works together. You have probably seen automated tests that are not totally reliable; you know, that one flickering test that occasionally fails for no obvious reason. Somebody once had a look at it and said that there was no reason for it to fail, so it got ignored; and now, whenever it fails again, somebody says, "Oh, it's that flickering test again, don't worry about it. It will be green again soon."
A flickering test is one that intermittently fails for no obvious reason and then passes when you run it again. There are various phrases used to describe tests like this; you may have heard them described as flaky tests, random failures, unstable tests, or some other name unique to your company.
The thing is that we now have a problem; tests do not flicker for no reason. This test is desperately trying to tell you something and you are ignoring it. What is it trying to tell you? Well, you can't be sure until you have found out why it is flickering; it could be one of many things. Among the many possibilities are:
- The test is not actually checking what you think it is checking
- The test may be badly written
- There may be an intermittent fault in the application under test (for example, a race condition nobody has identified yet; see the first sketch after this list)
- Maybe you have a problem with a date/time implementation (it's notoriously hard to get right and the cause of many bugs in many systems; see the second sketch after this list)
- Network problems (is there a proxy getting in the way?)
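To make the race condition case concrete, here is a minimal sketch of a flickering test, assuming JUnit 5. The HitCounter class is a hypothetical stand-in for application code with an unsynchronised read-modify-write; it is not taken from any real system.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class FlickeringRaceConditionTest {

    // Hypothetical application code with a race:
    // 'hits = hits + 1' is a read-modify-write that is not thread-safe.
    static class HitCounter {
        private int hits = 0;

        void recordHit() {
            hits = hits + 1;
        }

        int getHits() {
            return hits;
        }
    }

    @Test
    void countsEveryHit() throws InterruptedException {
        HitCounter counter = new HitCounter();

        // Four threads each record 1,000 hits concurrently.
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1_000; j++) {
                    counter.recordHit();
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }

        // Usually 4,000, but occasionally less when two threads interleave
        // their updates and an increment is lost; that is the flicker.
        assertEquals(4_000, counter.getHits());
    }
}
```

If you shrug this test off as "just flaky", the lost-update bug in HitCounter ships; if you investigate the flicker, the test has done its job.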
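And here is a sketch of the date/time case, again assuming JUnit 5; ReportScheduler is a hypothetical example, not code from any real project.

```java
import org.junit.jupiter.api.Test;

import java.time.LocalDate;

import static org.junit.jupiter.api.Assertions.assertEquals;

class FlickeringDateTest {

    // Hypothetical application code that reads the system clock directly.
    static class ReportScheduler {
        LocalDate nextRunDate() {
            return LocalDate.now().plusDays(1);
        }
    }

    @Test
    void schedulesTheReportForTomorrow() {
        ReportScheduler scheduler = new ReportScheduler();

        LocalDate expected = LocalDate.now().plusDays(1);
        LocalDate actual = scheduler.nextRunDate();

        // The expected and actual values read the clock at slightly
        // different moments. Run this as the clock rolls past midnight
        // and they land on different days; the test fails once, then
        // goes green again on the next run.
        assertEquals(expected, actual);
    }
}
```

One common fix is to inject a fixed java.time.Clock into the scheduler so the test controls the time it runs against instead of racing the wall clock.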
The point is that while your test is flickering you don't know what the problem is, but don't fool yourself; there is a problem. It's a problem that will at some point come back and bite you if you don't fix it.