An important challenge in creating automated tests is designing test fixtures, i.e., the setup code that initializes the system under test before the actual tests can run. Test designers have to choose between different setup approaches, trading off maintenance overhead against test execution speed.
Over time, test code quality can erode and test smells can develop, such as overly general fixtures, obscure inline code, and dead fields. In this paper, we investigate how fixture-related test smells evolve over time by analyzing several thousand revisions of five open source systems (Voldemort, PMD, Azure Java SDK, Jsoup, Checkstyle). Because Lehman's law states that the complexity of an evolving system increases unless work is done to maintain or reduce it, we expected test fixture smell density to increase continually over time.
In contrast, our findings indicate that test fixture smells do not generally grow over time. There are fluctuations (increases and decreases), and a manual investigation of strong fluctuations in smell trends revealed that they are often caused by structural changes, such as adding or removing features and refactoring. In general, we see that setup management strategies strongly influence which types of test fixture smells emerge in code, and that several types of fixture smells often emerge at the same time.
Test fixture smells seldom get resolved, but developers learn over time what appropriate fixture strategies are, and fewer test fixture smells are introduced. Further, we correlated test fixture smell density with the number of test methods per class; the correlation indicates that classes with more test methods have a higher smell density. The insights we gained from this evolutionary study (all details are in the paper) resulted in the following recommendations to avoid test fixture smells:
Strategies and Recommendations.
- Keep test classes small (and therefore coherent), by reducing the number of test methods within a class.
- Keep inheritance structures flat, and limit the scope of super classes (e.g., do not implement a single super test class that provides functionality for all other test classes).
- Use composition instead of inheritance to provide test classes with helper functionality.
- Create data classes that provide test inputs to avoid overloading test classes with field declarations that are only used for a single test method. This improves performance and understandability of test classes.
- Be aware that declaring fixture fields in the class header may hurt the understandability and performance of tests.
- Reconsider the “one test class per class” organization in case test methods within a single test class require diverse states and configurations for the system under test.
- Carefully consider the consequences of the chosen fixture strategy and the impact it will have on the project, as such decisions are unlikely to change later.
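To make the composition recommendation concrete, here is a minimal Java sketch (all names, such as `DatabaseFixture` and `UserLookupTest`, are hypothetical): the test class holds its fixture helper as a field instead of inheriting it from a broad superclass, so each test class pulls in only the setup logic it actually uses.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical fixture helper. Instead of putting this logic in a shared
// BaseTest superclass, it lives in its own small, focused class.
class DatabaseFixture {
    private final Map<String, String> store = new HashMap<>();

    void seed(String key, String value) { store.put(key, value); }

    String lookup(String key) { return store.get(key); }

    void tearDown() { store.clear(); }
}

// Composition: the test class holds the helper as a field, so it depends
// only on the fixture functionality it actually needs.
class UserLookupTest {
    private final DatabaseFixture db = new DatabaseFixture();

    void testLookupReturnsSeededValue() {
        db.seed("user:1", "alice");
        if (!"alice".equals(db.lookup("user:1"))) {
            throw new AssertionError("expected seeded value to be returned");
        }
        db.tearDown();
    }

    public static void main(String[] args) {
        new UserLookupTest().testLookupReturnsSeededValue();
        System.out.println("ok");
    }
}
```

A test class that needs a second kind of setup can simply compose a second helper, without widening a shared superclass for everyone.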
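The data-class recommendation can be sketched similarly (`ParserInput` and `TokenizerTest` are hypothetical names): the inputs for a scenario are bundled in a small data class and built locally in the test method, rather than declared as fields in the test class header where only one method uses them.

```java
// Hypothetical data class bundling the inputs of one test scenario.
class ParserInput {
    final String source;
    final int expectedTokenCount;

    ParserInput(String source, int expectedTokenCount) {
        this.source = source;
        this.expectedTokenCount = expectedTokenCount;
    }
}

class TokenizerTest {
    // No scenario-specific fields in the class header: each test method
    // constructs the data object it needs, keeping the fixture local.
    void testSimpleExpression() {
        ParserInput input = new ParserInput("a + b", 3);
        // Stand-in for a real tokenizer: split on whitespace.
        int tokens = input.source.split("\\s+").length;
        if (tokens != input.expectedTokenCount) {
            throw new AssertionError("unexpected token count: " + tokens);
        }
    }

    public static void main(String[] args) {
        new TokenizerTest().testSimpleExpression();
        System.out.println("ok");
    }
}
```

Because the inputs are local, a reader can understand the test without scanning the class header, and unused fields (dead fields) cannot accumulate there.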
More details can be found in the paper or the presentation: