One of the more common reasons that tests are unreliable is having a dedicated test automation team that works in isolation from the team developing the application. This should be avoided if at all possible, because the test automation team ends up perpetually playing catch-up. The development team rolls out new features that the test automation team then has to automate, but they are never sure what is coming next. They usually find out that existing features have changed when their tests break, and on top of fixing those tests they have to work out what the new functionality is and whether it is behaving as expected.
Something that normally happens in situations like these is that the test manager realizes there isn't enough time to do everything and looks for ways to reduce the workload. Do you fix the existing failing tests and stop automating new ones, putting together some manual regression scripts to cover the gap instead? Do you continue automating the new functionality and accept that some of your old tests will fail? If you do, how do you deal with the failing tests? Maybe put aside some time to fix them and accept that your test runs will never be fully green?
This is where you usually start to hear suggestions that it is time to lower the bar for the automated tests. "It should be fine as long as 95 percent of the automated tests pass; we know we have high coverage, and those failing 5 percent are probably due to changes to the system that we haven't yet had time to deal with." Everybody is happy at first; they continue to automate things and make sure that 95 percent of the tests are always passing. Soon, though, the pass mark starts to dip below 95 percent. A couple of weeks later a pragmatic decision is taken to lower the bar to 90 percent, then 85 percent, then 80 percent. Before you know it, tests are failing all over the place and you have no idea which failures are legitimate problems with the application, which are expected failures, and which are intermittent.
When tests go red, nobody really pays attention any more; they just talk about that magic 80 percent line. It's a high number, so we must have a decent product if that many tests are still passing, right? If things dip below the line we massage a few failing tests into passing, usually the low-hanging fruit, because we don't have time to tackle the really thorny issues.
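To see why a percentage gate hides problems, here is a minimal sketch of one, assuming a hypothetical script wired into a CI build; the script, the JUnit-style XML report it reads, and the `PASS_THRESHOLD` constant are all illustrative rather than taken from any particular tool. The point is that the gate treats every failure as interchangeable, so genuine regressions, stale tests, and intermittent failures all blend into the same number.

```python
# Hypothetical CI gate: fails the build only when the overall pass
# rate drops below a configurable threshold. All names here are
# illustrative, not from any real CI tool.
import sys
import xml.etree.ElementTree as ET

PASS_THRESHOLD = 0.95  # the "magic line", lowered over time in practice


def pass_rate(junit_xml_path: str) -> float:
    """Compute the pass rate from a JUnit-style XML report."""
    root = ET.parse(junit_xml_path).getroot()
    # JUnit-style reports carry tests/failures/errors counts as
    # attributes on each <testsuite> element.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    total = sum(int(s.get("tests", 0)) for s in suites)
    failed = sum(
        int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites
    )
    return (total - failed) / total if total else 1.0


if __name__ == "__main__":
    rate = pass_rate(sys.argv[1])
    print(f"Pass rate: {rate:.1%}")
    # The gate only counts failures; it cannot tell a genuine
    # regression from a stale or flaky test, so any mix of failures
    # under the threshold is waved through.
    sys.exit(0 if rate >= PASS_THRESHOLD else 1)
```

Notice that nothing in the gate ever asks *which* tests failed or *why*; that is exactly the information lost once the team starts managing to a percentage instead of to green.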