There are many reasons why automated tests fail. Ideally, a failing case points to a real irregularity in our system. Sometimes, however, failures are just false alarms caused by issues outside the code under test.
- An uncertain test environment
A flaky test is a test that fails to produce the same result each time the same code is run. Often it is the result of a test environment that is modified and shared by many people on the development team. To avoid this, make sure your test setup always gets the environment into a known state before each run.
Tips: In your test setup, clear unnecessary data and prepare the test data through an API before each test case runs.
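The setup tip above can be sketched as a small reset-and-seed step that runs before each case. The API client and its endpoints here are hypothetical stand-ins (a real suite would call your project's HTTP client); the point is the order: clear leftovers first, then create exactly the data the test needs.

```python
# A minimal sketch of per-test setup through an API, assuming a
# hypothetical client with delete/create endpoints for "orders".

class FakeApiClient:
    """Stands in for a real HTTP API client in this sketch."""
    def __init__(self):
        # Pretend a previous run left stale data behind.
        self.orders = [{"id": 1, "status": "stale"}]

    def delete_all_orders(self):
        self.orders.clear()

    def create_order(self, **fields):
        order = {"id": len(self.orders) + 1, **fields}
        self.orders.append(order)
        return order


def prepare_environment(client):
    """Clear leftover data, then seed exactly what this test needs."""
    client.delete_all_orders()   # remove anything earlier runs left behind
    return client.create_order(status="pending", amount=100)


client = FakeApiClient()
seeded = prepare_environment(client)
print(len(client.orders))  # 1 -> only the data this test created exists
```

Running this setup at the start of every case means a test never depends on what a teammate, or a previous run, left in the shared environment.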
- Time order dependencies
One of the key success factors of smooth automated testing is that all tests in a test suite should be independent of one another and the order in which they are run should not affect their outcomes.
Tips: Make sure each case has its own test setup and is independent of the other cases. If a wait is unavoidable, separate the time-sensitive scenario from the other flows.
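A minimal sketch of what order independence looks like in practice: each case builds its own state through a setup helper instead of reading what an earlier test left behind. The `Cart` class is an illustrative stand-in for whatever object your tests exercise.

```python
# Order-independent tests: every case creates its own fresh state.

class Cart:
    """Illustrative system under test."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


def make_cart():
    """Per-test setup: fresh state, nothing shared between cases."""
    return Cart()


def test_add_single_item():
    cart = make_cart()           # own setup, no reliance on other tests
    cart.add("book")
    assert cart.items == ["book"]


def test_cart_starts_empty():
    cart = make_cart()           # passes no matter which test ran first
    assert cart.items == []


# Both execution orders give the same result:
for case in (test_cart_starts_empty, test_add_single_item):
    case()
for case in (test_add_single_item, test_cart_starts_empty):
    case()
print("ok")
```

If `test_cart_starts_empty` had reused the cart from `test_add_single_item`, it would pass or fail depending on run order, which is exactly the flakiness this practice avoids.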
- Same data when running parallel test suites
Sometimes all test cases run successfully on our local machine yet fail on the CI system. One of the most common reasons is that we tend to use the same data, such as a user account, across different test suites. This works locally because we do not run the suites at the same time. On the CI system the suites run in parallel, and when they share the same user account, their behaviors can collide with one another.
Tips: Similarly to topic 2, make sure each test suite has its own test user and is independent of the other suites.
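One simple way to guarantee that parallel suites never share an account is to derive a unique test user per suite run. The naming scheme below is an assumption for illustration; adapt it to however your system actually provisions accounts.

```python
# A sketch of per-suite test users so parallel CI runs do not collide
# on a shared account. The address format is a hypothetical convention.
import uuid

def unique_test_user(suite_name):
    """Derive a user name no other concurrent suite run will use."""
    return f"{suite_name}-{uuid.uuid4().hex[:8]}@test.example"


checkout_user = unique_test_user("checkout")
refund_user = unique_test_user("refund")
print(checkout_user != refund_user)  # True -> suites get distinct accounts
```

Creating (and later cleaning up) these throwaway users in the suite's setup keeps each suite fully isolated even when CI runs everything at once.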
- Your test case is too big and too complex.
A long test case might save you some setup time, but it can become unnecessarily long, resulting in unclear test scenarios and, worse, preventing certain flows from being tested.
Tips: Only design related tests to be in the same flow. If a flow becomes too long, break it down. Review test cases with your Business Analyst or Product Owner to get a fresh perspective on your test flows.
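Breaking a long flow down often means extracting the common early steps into a shared setup helper so each remaining case checks one behavior. The register/order steps below are illustrative, not any real application's flow.

```python
# A sketch of splitting one long end-to-end case into focused cases
# that share a setup helper. All names here are illustrative.

def register_user():
    return {"name": "tester", "orders": []}


def place_order(user, item):
    user["orders"].append({"item": item, "paid": False})
    return user["orders"][-1]


def setup_user_with_order():
    """Shared setup: each focused case starts from the same point."""
    user = register_user()
    order = place_order(user, "book")
    return user, order


# Instead of one giant test walking register -> order -> pay -> refund,
# each case verifies a single behavior and fails for a single reason:
def test_order_is_recorded():
    user, order = setup_user_with_order()
    assert order in user["orders"]


def test_new_order_is_unpaid():
    _, order = setup_user_with_order()
    assert order["paid"] is False


test_order_is_recorded()
test_new_order_is_unpaid()
print("ok")
```

When one of these small cases fails, the name alone tells you which behavior broke, whereas a failure halfway through a giant flow leaves the rest of the flow untested.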
- Change in requirements
As obvious as it sounds, many test runs fail because of changes in software behavior. Make sure to revise your test flows every sprint so the tests stay up to date.
Tips: In every planning session, evaluate the changes and the impact the new stories will have on your system, and prepare to update your tests accordingly.
Avoiding flaky, falsely failing test cases is one of the challenges software development teams face. The key success factors are making sure test flows are independent of each other and that the scenarios reflect real use cases. Reviewing test cases with everyone on the team helps eliminate misunderstandings and improves the quality of your product.