
Obscure Test

The book has now been published and the content of this chapter has likely changed substantially.
Please see page 186 of xUnit Test Patterns for the latest information.
Also known as: Long Test, Complex Test, Verbose Test


It is difficult to understand the test at a glance.

Automated tests should serve at least two purposes. First, they should act as documentation of how the system under test (SUT) should behave; we call this Tests as Documentation (see Goals of Test Automation on page X). Second, they should be a self-verifying executable specification. These two goals are often contradictory because the level of detail needed for tests to be executable may make them so verbose as to be difficult to understand.

Symptoms

We are having trouble understanding what behavior a test is verifying.

Impact

The first issue with an Obscure Test is that it is harder to understand and therefore harder to maintain. It will almost certainly preclude achieving Tests as Documentation, and this in turn can lead to High Test Maintenance Cost (page X).

The second issue with an Obscure Test is that it may allow bugs to slip through because of test coding errors hidden within it. This can result in Buggy Tests (page X). Furthermore, when one assertion in an Eager Test fails, the assertions after it are never run, so any errors they would have caught stay hidden and we lose valuable test debugging data.

Causes

Paradoxically, Obscure Test can be caused by either too much information in the Test Method (page X) or too little. Mystery Guest is an example of too little information while Eager Test and Irrelevant Information are both examples of too much.

The root cause of Obscure Test is typically a lack of attention to keeping the test code clean and simple. Test code is just as important as the production code and needs to be refactored just as often. A major contributor to Obscure Test is a "just do it inline" mentality when writing tests. Putting code inline results in large, complex Test Methods because some things just take a lot of code to do.

The first few causes of Obscure Test discussed below result from having the wrong information in the test.

The general problem of Verbose Tests - tests that use too much code to say what they need to say - can be further broken down into a number of root causes, each described in the sections that follow.

Cause: Eager Test

The test is verifying too much functionality in a single Test Method.

Symptoms

The test goes on and on verifying this, that and "everything but the kitchen sink." It is hard to tell which part is fixture setup and which part is exercising the SUT.

   public void testFlightMileage_asKm2() throws Exception {
      // setup fixture
      // exercise constructor
      Flight newFlight = new Flight(validFlightNumber);
      // verify constructed object
      assertEquals(validFlightNumber, newFlight.number);
      assertEquals("", newFlight.airlineCode);
      assertNull(newFlight.airline);
      // setup mileage
      newFlight.setMileage(1122);
      // exercise mileage translator
      int actualKilometres = newFlight.getMileageAsKm();
      // verify results
      int expectedKilometres = 1810;
      assertEquals( expectedKilometres, actualKilometres);
      // now try it with a canceled flight:
      newFlight.cancel();
      try {
         newFlight.getMileageAsKm();
         fail("Expected exception");
      } catch (InvalidRequestException e) {
         assertEquals( "Cannot get cancelled flight mileage", e.getMessage());
      }
   }
Example EagerTest embedded from java/com/xunitpatterns/testtemplates/BadExamples.java

Root Cause

When executing tests manually, it makes sense to chain a number of logically distinct test conditions into a single test case to reduce the setup overhead of each test. This works because we have liveware (an intelligent human being) executing the tests and they can decide at any point whether it makes sense to keep on going or whether the failure of a step is severe enough to cause them to abandon the execution of the test.

Possible Solution

When the tests are automated, it is better to have a suite of independent Single Condition Tests (see Principles of Test Automation on page X) as these provide much better Defect Localization (see Goals of Test Automation).
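
For example, the Eager Test shown above could be split into independent Single Condition Tests along these lines (a sketch reusing the same Flight API from the example; the test method names are illustrative):

   public void testConstructor_validFlightNumber() throws Exception {
      // exercise constructor
      Flight newFlight = new Flight(validFlightNumber);
      // verify constructed object
      assertEquals(validFlightNumber, newFlight.number);
      assertEquals("", newFlight.airlineCode);
      assertNull(newFlight.airline);
   }

   public void testGetMileageAsKm_validMileage() throws Exception {
      // setup fixture
      Flight newFlight = new Flight(validFlightNumber);
      newFlight.setMileage(1122);
      // exercise mileage translator
      int actualKilometres = newFlight.getMileageAsKm();
      // verify results
      assertEquals(1810, actualKilometres);
   }

   public void testGetMileageAsKm_cancelledFlight() throws Exception {
      // setup fixture
      Flight cancelledFlight = new Flight(validFlightNumber);
      cancelledFlight.setMileage(1122);
      cancelledFlight.cancel();
      // exercise and verify that a cancelled flight rejects the request
      try {
         cancelledFlight.getMileageAsKm();
         fail("Expected exception");
      } catch (InvalidRequestException e) {
         assertEquals("Cannot get cancelled flight mileage", e.getMessage());
      }
   }

Each of these tests can now fail independently, so a failure points directly at the behavior that is broken.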

Cause: Mystery Guest

The test reader is not able to see the cause and effect between fixture and verification logic because part of it is done outside the Test Method.

Symptoms

Tests invariably require passing data to the SUT. The data used in the fixture setup and exercise SUT phases of the Four-Phase Test (page X) define the pre-conditions of the SUT and influence how it should behave. The post-conditions (the expected outcomes) are reflected in the data passed as arguments to the Assertion Methods (page X) in the verify outcome phase of the test.

When the fixture setup and/or the result verification part of a test depends on information that is not visible within the test, and the test reader finds it difficult to understand the behavior being verified without first finding and inspecting that external information, we have a Mystery Guest on our hands. Here's an example in which we cannot tell what the fixture looks like, making it hard to relate the expected outcome to the preconditions of the test.

   public void testGetFlightsByFromAirport_OneOutboundFlight_mg() throws Exception {
      loadAirportsAndFlightsFromFile("test-flights.csv");
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirportCode( "YYC");
      // Verify Outcome
      assertEquals( 1, flightsAtOrigin.size());
      FlightDto firstFlight = (FlightDto) flightsAtOrigin.get(0);
      assertEquals( "Calgary", firstFlight.getOriginCity());
   }  
Example MysteryGuest embedded from java/com/clrstream/ex6/services/test/FlightManagementFacadeTest.java

Impact

Mystery Guest makes it hard to see the cause and effect relationship between the test fixture (the pre-conditions of the test) and the expected outcome of the test. Such tests don't fulfill the role of Tests as Documentation. Even worse, someone may modify or delete the external resource without realizing the impact this will have when the tests are run. This behavior smell has its own name: Resource Optimism (see Erratic Test on page X)!

If the Mystery Guest is a Shared Fixture (page X), it may also lead to Erratic Tests if other tests modify it.

Root Cause

A test depends on mysterious external resources, making it difficult to understand the behavior that it is verifying. A Mystery Guest may take many forms: for example, a fixture loaded from an external file (as in the example above), data already sitting in a database, or objects built by fixture setup code that the reader cannot see within the Test Method.

All these scenarios share a common outcome: it is hard to see the cause and effect relationship between the test fixture and the expected outcome of the test because the relevant data is not visible in the tests. If the contents of the data are not clearly described by the names we give to the variables or files that contain them, we have a Mystery Guest.

Possible Solution

Using a Fresh Fixture (page X) built using Inline Setup (page X) is the obvious solution for Mystery Guest. When applied to the file example, this would involve creating the contents of the file as a string within our test so that it is visible, and then either writing it out to the file system (the Setup External Resource (page X) refactoring) or putting it into a file system Test Stub (page X) as part of the fixture setup (see the Inline Resource (page X) refactoring for details). To avoid Irrelevant Information, we may want to hide the details of the construction behind one or more evocatively named Creation Methods (page X) that append to the file's contents.
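
As a rough sketch, the Mystery Guest test above might then read like this; the createTestFlight Creation Method is hypothetical and stands in for whatever fixture-construction logic replaces the external file, but the point is that the one flight the test depends on is now visible in the Test Method:

   public void testGetFlightsByFromAirport_OneOutboundFlight() throws Exception {
      // Setup Fixture - the one flight this test needs, visible in the test
      createTestFlight("YYC", "Calgary", "YVR", "Vancouver");   // hypothetical Creation Method
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirportCode("YYC");
      // Verify Outcome
      assertEquals(1, flightsAtOrigin.size());
      FlightDto firstFlight = (FlightDto) flightsAtOrigin.get(0);
      assertEquals("Calgary", firstFlight.getOriginCity());
   }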

If we must use a Shared Fixture or Implicit Setup, we should consider using evocatively named Finder Methods (see Test Utility Method on page X) to access the objects in the fixture. If we must use external resources such as files, we should put them into a special folder/directory and give them names that make it obvious what kind of data they hold.

Cause: General Fixture

The test is building or referencing a larger fixture than is needed to verify the functionality in question.

Symptoms

There seems to be a lot of test fixture being built; much more than would appear to be necessary for any particular test. It is hard to understand the "cause and effect" relationship between the fixture, the part of the SUT being exercised and the expected outcome of a test.

Consider the following set of tests:

   public void testGetFlightsByFromAirport_OneOutboundFlight() throws Exception {
      setupStandardAirportsAndFlights();
      FlightDto outboundFlight = findOneOutboundFlight();
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirport(
                     outboundFlight.getOriginAirportId());
      // Verify Outcome
      assertOnly1FlightInDtoList( "Flights at origin", outboundFlight,
                                  flightsAtOrigin);
   }
   
   public void testGetFlightsByFromAirport_TwoOutboundFlights() throws Exception {
      setupStandardAirportsAndFlights();
      FlightDto[] outboundFlights = findTwoOutboundFlightsFromOneAirport();
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirport(
                     outboundFlights[0].getOriginAirportId());
      // Verify Outcome
      assertExactly2FlightsInDtoList( "Flights at origin", outboundFlights,
                                      flightsAtOrigin);
   }
Example GeneralFixture embedded from java/com/clrstream/ex6/services/test/FlightManagementFacadeTest.java

From reading the exercise SUT and verify outcome parts of these tests, it would appear that they need very different fixtures. Even though the tests use a Fresh Fixture setup strategy, they share the same fixture setup logic by calling the setupStandardAirportsAndFlights method. The name of the method is a clue to this classic but easily recognized example of a General Fixture. A more difficult case to diagnose would be if each test created the Standard Fixture (page X) inline, or if each created a somewhat different fixture that nonetheless contained much more than that individual test needed.

We may also be experiencing Slow Tests (page X) or a Fragile Fixture (see Fragile Test on page X).

Root Cause

The most common cause of this problem is a fixture that is designed to support many tests. Examples include the use of Implicit Setup or a Shared Fixture across many tests with different fixture requirements. This results in the fixture becoming large and hard to understand, and it may grow larger and larger over time. The root cause is that both these approaches involve using a Standard Fixture that must meet the requirements of all the tests that use it; the more diverse their needs, the more likely we are to end up with a General Fixture.

Impact

When the test fixture is designed to support many different tests it can be very difficult to understand how each test uses the fixture. This reduces the likelihood of Tests as Documentation and can result in a Fragile Fixture as people make changes to the fixture to handle new tests. It can also result in Slow Tests.

Possible Solution

We need to move to a Minimal Fixture (page X) to address this problem. This can best be done by using a Fresh Fixture for each test. If we must use a Shared Fixture we should consider applying the Make Resource Unique (page X) refactoring to create a virtual Database Sandbox (page X) for each test. (Note that switching to an Immutable Shared Fixture (see Shared Fixture) does not fully address the core of this problem since it does not help us determine which parts of the fixture are needed by each test; only the parts that are modified are so identified!)
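
A sketch of the first test rewritten against a Minimal Fixture follows; the createOneOutboundFlight Creation Method is hypothetical and would build only the single flight (and the airports it connects) that this particular test needs, rather than the "standard" airports and flights:

   public void testGetFlightsByFromAirport_OneOutboundFlight() throws Exception {
      // Setup Fixture - build only what this test needs (hypothetical Creation Method)
      FlightDto outboundFlight = createOneOutboundFlight();
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirport(
                     outboundFlight.getOriginAirportId());
      // Verify Outcome
      assertOnly1FlightInDtoList("Flights at origin", outboundFlight, flightsAtOrigin);
   }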

Cause: Irrelevant Information

The test is exposing a lot of irrelevant details about the fixture that distract the test reader from what really affects the behavior of the SUT.

Symptoms

As a test reader, we find it hard to determine which of the values passed to objects actually affect the expected outcome:

   public void testAddItemQuantity_severalQuantity_v10(){
      //   Setup Fixture
      Address billingAddress = createAddress( "1222 1st St SW", "Calgary", "Alberta", "T2N 2V2", "Canada");
      Address shippingAddress = createAddress( "1333 1st St SW", "Calgary", "Alberta", "T2N 2V2", "Canada");
      Customer customer = createCustomer( 99, "John", "Doe", new BigDecimal("30"),
                         billingAddress, shippingAddress);
      Product product = createProduct( 88,"SomeWidget",new BigDecimal("19.99"));
      Invoice invoice = createInvoice(customer);
      // Exercise SUT
      invoice.addItemQuantity(product, 5);
      // Verify Outcome
      LineItem expected = new LineItem(invoice, product,5, new BigDecimal("30"), new BigDecimal("69.96"));
      assertContainsExactlyOneLineItem(invoice, expected);
   }
Example IrrelevantInformation embedded from java/com/clrstream/camug/example/test/TestRefactoringExample.java

Fixture setup logic may seem very long and complicated as it weaves together many interrelated objects. This makes it hard to determine what the test is verifying because the reader doesn't understand the pre-conditions of the test:

   public void testGetFlightsByOriginAirport_TwoOutboundFlights() throws Exception {
      FlightDto expectedCalgaryToSanFran = new FlightDto();
      expectedCalgaryToSanFran.setOriginAirportId(calgaryAirportId);
      expectedCalgaryToSanFran.setOriginCity(CALGARY_CITY);
      expectedCalgaryToSanFran.setDestinationAirportId(sanFranAirportId);
      expectedCalgaryToSanFran.setDestinationCity(SAN_FRAN_CITY);
      expectedCalgaryToSanFran.setFlightNumber(
         facade.createFlight(calgaryAirportId,sanFranAirportId));
      FlightDto expectedCalgaryToVan = new FlightDto();
      expectedCalgaryToVan.setOriginAirportId(calgaryAirportId);
      expectedCalgaryToVan.setOriginCity(CALGARY_CITY);
      expectedCalgaryToVan.setDestinationAirportId(vancouverAirportId);
      expectedCalgaryToVan.setDestinationCity(VANCOUVER_CITY);
      expectedCalgaryToVan.setFlightNumber(facade.createFlight(
            calgaryAirportId, vancouverAirportId));
Example ObscureSetup embedded from java/com/clrstream/ex5/test/FlightManagementFacadeTest.java

The code that verifies the expected outcome of a test can also be too complicated to understand:

      List lineItems = inv.getLineItems();
      assertEquals("number of items", lineItems.size(), 2);
      //   verify first item
      LineItem actual = (LineItem)lineItems.get(0);
      assertEquals(expItem1.getInv(), actual.getInv());
      assertEquals(expItem1.getProd(), actual.getProd());
      assertEquals(expItem1.getQuantity(), actual.getQuantity());
      //   verify second item
      actual = (LineItem)lineItems.get(1);
      assertEquals(expItem2.getInv(), actual.getInv());
      assertEquals(expItem2.getProd(), actual.getProd());
      assertEquals(expItem2.getQuantity(), actual.getQuantity());
   }
Example ObscureVerification embedded from java/com/clrstream/camug/example/test/InvoiceTest.java

Root Cause

A test contains a lot of data, either as Literal Values (page X) or as variables. Irrelevant Information often occurs in conjunction with Hard-Coded Test Data or General Fixture, but it can also occur because we make visible all the data the test needs to execute rather than only the data needed to understand the test. When writing tests, the path of least resistance is to use whatever methods are available (on the SUT and other objects) and to fill in all the parameters with values, whether or not they are relevant to the test.

Another cause is when we include all the code needed to verify the outcome using Procedural State Verification (see State Verification on page X) rather than using a much more compact "declarative" style of specifying the expected outcome.

Impact

It is hard to achieve Tests as Documentation if the tests contain many seemingly random bits of information that don't clearly link the pre-conditions with the post-conditions. Likewise, wading through many steps of fixture setup or result verification logic can result in High Test Maintenance Cost and can increase the probability of Production Bugs (page X) or Buggy Tests.

Possible Solution

The best way to get rid of Irrelevant Information in fixture setup logic is to replace direct calls to constructors or Factory Methods [GOF] with calls to Parameterized Creation Methods (see Creation Method) that take only the relevant information as parameters. Fixture values that do not matter to the test (those that do not affect the expected outcome) should be defaulted within Creation Methods or replaced by Dummy Objects (page X). In this way we say to the test reader, "the values you don't see don't affect the expected outcome." We can replace fixture values that appear in both the fixture setup and outcome verification parts of the test with suitably initialized named constants, as long as we are using a Fresh Fixture approach to fixture setup.
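
For instance, the createACustomer call that appears in later examples might be implemented as a Parameterized Creation Method along these lines (a sketch; the default values shown are assumptions, chosen so that only the discount rate remains visible in the test):

   // Parameterized Creation Method: the caller supplies only the discount rate,
   // the one customer attribute that affects the expected outcome of these tests.
   // Everything else is filled in with "don't care" defaults.
   private Customer createACustomer(BigDecimal discountRate) {
      Address billingAddress = createAddress("1222 1st St SW", "Calgary",
                                             "Alberta", "T2N 2V2", "Canada");
      Address shippingAddress = createAddress("1333 1st St SW", "Calgary",
                                              "Alberta", "T2N 2V2", "Canada");
      return createCustomer(99, "John", "Doe", discountRate,
                            billingAddress, shippingAddress);
   }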

Irrelevant Information in result verification logic can be hidden by using assertions on entire Expected Objects (see State Verification) rather than asserting on individual fields and by creating Custom Assertions (page X) that hide complex procedural verification logic.
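
For example, the procedural line-item verification shown earlier could be hidden behind a Custom Assertion, roughly like this (a sketch; the assertion name is illustrative and only the verification portion of the test is shown):

      // Verify Outcome - Expected Objects compared via a Custom Assertion
      List lineItems = inv.getLineItems();
      assertEquals("number of items", 2, lineItems.size());
      assertLineItemsEqual("first item", expItem1, (LineItem) lineItems.get(0));
      assertLineItemsEqual("second item", expItem2, (LineItem) lineItems.get(1));
   }

   // Custom Assertion that hides the field-by-field comparison
   private void assertLineItemsEqual(String message, LineItem expected, LineItem actual) {
      assertEquals(message + " invoice", expected.getInv(), actual.getInv());
      assertEquals(message + " product", expected.getProd(), actual.getProd());
      assertEquals(message + " quantity", expected.getQuantity(), actual.getQuantity());
   }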

Cause: Hard-Coded Test Data

Data values in the fixture, assertions or arguments of the SUT are hard-coded in the Test Method obscuring cause-effect relationships between inputs and expected outputs.

Symptoms

As a test reader, we find it hard to determine how various hard-coded (i.e. literal) values in the test are related to each other and which ones should affect the behavior of the SUT. We may also be seeing behavior smells such as Erratic Tests.

   public void testAddItemQuantity_severalQuantity_v12(){
      //  Setup Fixture
      Customer cust = createACustomer(new BigDecimal("30"));
      Product prod = createAProduct(new BigDecimal("19.99"));
      Invoice invoice = createInvoice(cust);
      // Exercise SUT
      invoice.addItemQuantity(prod, 5);
      // Verify Outcome
      LineItem expected = new LineItem(invoice, prod, 5,
            new BigDecimal("30"), new BigDecimal("69.96"));
      assertContainsExactlyOneLineItem(invoice, expected);
   }
Example CamugExampleDependantObjectOmission1 embedded from java/com/clrstream/camug/example/test/TestRefactoringExample.java

This specific example isn't so bad because there aren't very many literal values, but if we aren't good at doing math in our heads, we might still miss the relationship between the unit price (19.99), the item quantity (5), the discount (30%) and the total price (69.96).

Root Cause

Hard-Coded Test Data occurs when a test contains a lot of seemingly unrelated Literal Values. Tests invariably require passing data to the SUT. The data used in the fixture setup and exercise SUT phases of the Four-Phase Test define the pre-conditions of the SUT and influence how it should behave, while the post-conditions (the expected outcomes) are reflected in the data passed as arguments to the Assertion Methods in the verify outcome phase of the test. When writing tests, the path of least resistance is to use whatever methods are available (on the SUT and other objects) and to fill in all the parameters with values, whether or not they are relevant to the test.

When we use "cut & paste" reuse of test logic, we find ourselves replicating all the literal values to all the derivative tests.

Impact

It is hard to achieve Tests as Documentation if the tests contain many seemingly random bits of information that don't clearly link the pre-conditions with the post-conditions. A few literal parameters may not seem like a bad thing; they don't add that much effort to understanding a test. But as the number of literals grows, it can become much more difficult to understand a test. This is especially true when the signal-to-noise ratio becomes very low because a majority of the values are irrelevant to the test.

The second major impact occurs when we start getting collisions between tests because they are using the same values. This only happens when we use a Shared Fixture because a Fresh Fixture strategy shouldn't leave any objects around with which a subsequent test can collide.

Possible Solution

The best way to get rid of Hard-Coded Test Data is to replace literal constants with something else. Fixture values that determine which scenario is being executed (e.g. type codes) are probably the only ones that are reasonable to leave as literals, but even these can be converted to named constants.

Fixture values that do not matter to the test (those which do not affect the expected outcome) should be defaulted within Creation Methods. In this way we say to the test reader "the values you don't see don't affect the expected outcome". We can replace fixture values that appear in both the fixture setup and outcome verification parts of the test with suitably initialized named constants as long as we are using a Fresh Fixture approach to fixture setup.

Values in the result verification logic that are based on values used in the fixture or passed as arguments to the SUT should be replaced with Derived Values (page X) to make the calculations obvious to the test reader.
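
A sketch of how the expected extended price in the earlier example could be expressed as a Derived Value instead of the hard-coded 69.96 (only the affected part of the test is shown; how the SUT rounds the discounted amount is an assumption here and the calculation would need to mirror its actual pricing rules):

      final int QUANTITY = 5;
      final BigDecimal UNIT_PRICE = new BigDecimal("19.99");
      final BigDecimal DISCOUNT_PERCENT = new BigDecimal("30");
      // ... fixture setup and exercise as before, using the named constants ...
      // Verify Outcome - the expected extended price is derived from the inputs
      BigDecimal basePrice = UNIT_PRICE.multiply(new BigDecimal(QUANTITY));
      BigDecimal expectedExtendedPrice =
            basePrice.subtract(basePrice.multiply(DISCOUNT_PERCENT).movePointLeft(2));
      LineItem expected = new LineItem(invoice, prod, QUANTITY,
            DISCOUNT_PERCENT, expectedExtendedPrice);
      assertContainsExactlyOneLineItem(invoice, expected);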

If we are using any variant of Shared Fixture, we should try to use Distinct Generated Values (see Generated Value on page X) to ensure that each time a test is run it uses a different value. This is especially important for fields that are used as unique keys in databases. A common way of encapsulating this logic is through the use of Anonymous Creation Methods (see Creation Method).
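
A minimal sketch of an Anonymous Creation Method built on a Distinct Generated Value, assuming the same createCustomer and createAddress helpers as the earlier examples; the counter-based ID generation shown here is just one simple option, and keys destined for a shared database may need a database sequence or similar instead:

   private static int uniqueCustomerId = 1000;

   // Distinct Generated Value: returns a different id on each call within a test run
   private int getUniqueCustomerId() {
      return uniqueCustomerId++;
   }

   // Anonymous Creation Method: the test doesn't care which customer this is,
   // only that it won't collide with customers created by other tests
   private Customer createAnonymousCustomer() {
      int id = getUniqueCustomerId();
      Address anyAddress = createAddress("1222 1st St SW", "Calgary",
                                         "Alberta", "T2N 2V2", "Canada");
      return createCustomer(id, "FirstName" + id, "LastName" + id,
                            new BigDecimal("0"), anyAddress, anyAddress);
   }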

Cause: Indirect Testing

The Test Method is interacting with the SUT indirectly via another object thereby making the interactions more complex.

Symptoms

A test interacts primarily with objects other than the one whose behavior it purports to verify. The test has to construct and interact with objects that contain references to the SUT rather than with the SUT itself. Testing business logic through the presentation layer is a common example of Indirect Testing.

   private final int LEGAL_CONN_MINS_SAME = 30;
   public void testAnalyze_sameAirline_LessThanConnectionLimit()
   throws Exception {
      // setup
      FlightConnection illegalConn = createSameAirlineConn( LEGAL_CONN_MINS_SAME - 1);
      // exercise
      FlightConnectionAnalyzerImpl sut = new FlightConnectionAnalyzerImpl();
      String actualHtml = sut.getFlightConnectionAsHtmlFragment( illegalConn.getInboundFlightNumber(),
                       illegalConn.getOutboundFlightNumber());
      // verification
      StringBuffer expected = new StringBuffer();
      expected.append("<span class=”boldRedText”>");
      expected.append("Connection time between flight ");
      expected.append(illegalConn.getInboundFlightNumber());
      expected.append(" and flight ");
      expected.append(illegalConn.getOutboundFlightNumber());
      expected.append(" is ");
      expected.append(illegalConn.getActualConnectionTime());
      expected.append(" minutes.</span>");
      assertEquals("html", expected.toString(), actualHtml);
   }
Example IndirectTest embedded from java/com/clrstream/ex10/student/test/FlightConnectionAnalyzerTest.java

Impact

It may not be possible to test "anything that could possibly break" in the SUT via the intermediate object and the tests are unlikely to be very clear or understandable. They won't result in Tests as Documentation.

Indirect Testing may result in Fragile Tests because changes in the intermediate objects may result in the tests needing to be modified.

Root Cause

The SUT may be "private" to the class being used to access it from the test. It may not be possible to create the SUT directly due to constructors being private. This is just one sign that the software is not designed for testability.

It may be that the actual outcome of exercising the SUT cannot be observed directly, so the expected outcome of the test must be verified through an intermediate object.

Possible Solution

It may be necessary to improve the design-for-testability of the SUT to remove this smell. We might be able to expose the SUT directly to the test by using an Extract Component refactoring (a variant of the Extract Class [Fowler] refactoring). This may result in an untestable Humble Object (page X) and an easily tested object that contains most or all of the actual logic.

   public void testAnalyze_sameAirline_EqualsConnectionLimit()
   throws Exception {
      // setup
      Mock flightMgntStub = mock(FlightManagementFacade.class);
      Flight firstFlight = createFlight();
      Flight secondFlight = createConnectingFlight( firstFlight, LEGAL_CONN_MINS_SAME);
      flightMgntStub.expects(once()).method("getFlight")
                     .with(eq(firstFlight.getFlightNumber()))
                     .will(returnValue(firstFlight));
      flightMgntStub.expects(once()).method("getFlight")
                     .with(eq(secondFlight.getFlightNumber()))
                     .will(returnValue(secondFlight));
      // exercise
      FlightConnAnalyzer theConnectionAnalyzer = new FlightConnAnalyzer();
      theConnectionAnalyzer.facade =  (FlightManagementFacade)flightMgntStub.proxy();
      FlightConnection actualConnection = theConnectionAnalyzer.getConn( firstFlight.getFlightNumber(),
                              secondFlight.getFlightNumber());
      // verification
      assertNotNull("actual connection", actualConnection);
      assertTrue("IsLegal", actualConnection.isLegal());
   }
Example BusinessLayerTest embedded from java/com/clrstream/ex10/solution/test/FlightConnectionAnalyzerTest.java

Sometimes we are forced to interact indirectly because we cannot refactor the code to expose the logic we are trying to test. In these cases we should encapsulate the complex logic forced on us by Indirect Testing behind suitably named Test Utility Methods. Fixture setup can be hidden behind Creation Methods and result verification can be hidden by Verification Methods (see Custom Assertion). These are both examples of SUT API Encapsulation (see Test Utility Method).

   public void testAnalyze_sameAirline_LessThanConnLimit()
   throws Exception {
      // setup
      FlightConnection illegalConn = createSameAirlineConn( LEGAL_CONN_MINS_SAME - 1);
      FlightConnectionAnalyzerImpl sut = new FlightConnectionAnalyzerImpl();
      // exercise SUT
      String actualHtml = sut.getFlightConnectionAsHtmlFragment( illegalConn.getInboundFlightNumber(),
                       illegalConn.getOutboundFlightNumber());
      // verification
      assertConnectionIsIllegal(illegalConn, actualHtml);
   }
Example AbstractedIndirectTest embedded from java/com/clrstream/ex10/student/test/FlightConnectionAnalyzerTest.java

The following Custom Assertion hides the ugliness of extracting the business result from the presentation noise. It was created by doing a simple Extract Method [Fowler] refactoring on the test, but it would be more robust if it searched inside the HTML for key strings rather than building up the whole expected string and comparing it all at once. We would probably have other Presentation Layer Tests (see Layer Test on page X) that verify that the presentation logic is formatting the HTML string properly.

   private void assertConnectionIsIllegal( FlightConnection conn, String actualHtml) {
      // set up expected value
      StringBuffer expected = new StringBuffer();
      expected.append("<span class=”boldRedText”>");
      expected.append("Connection time between flight ");
      expected.append(conn.getInboundFlightNumber());
      expected.append(" and flight ");
      expected.append(conn.getOutboundFlightNumber());
      expected.append(" is ");
      expected.append(conn.getActualConnectionTime());
      expected.append(" minutes.</span>");
      // verification
      assertEquals("html", expected.toString(), actualHtml);
   }
Example AbstractedIndirectTestUtility embedded from java/com/clrstream/ex10/student/test/FlightConnectionAnalyzerTest.java

Solution Patterns

A good test strategy helps keep the test code understandable, but just as "no battle plan survives first contact with the enemy," no test infrastructure can anticipate all the needs of all the tests. We have to expect to evolve the test infrastructure as the software matures and our test automation skills improve.

Reuse of test logic for several scenarios should be done by having several tests call Test Utility Methods or a common Parameterized Test (page X) passing in the already built test fixture and/or Expected Objects.
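
As an illustration, the invoice tests from earlier might share their logic through a Parameterized Test rather than through cut and paste (a sketch; the runAddItemQuantityTest and createExpectedLineItem helpers are hypothetical):

   public void testAddItemQuantity_oneItem() {
      runAddItemQuantityTest(1);
   }

   public void testAddItemQuantity_severalQuantity() {
      runAddItemQuantityTest(5);
   }

   // Parameterized Test: the shared test logic, with the varying input passed in
   private void runAddItemQuantityTest(int quantity) {
      // Setup Fixture
      Customer cust = createACustomer(new BigDecimal("30"));
      Product prod = createAProduct(new BigDecimal("19.99"));
      Invoice invoice = createInvoice(cust);
      // Exercise SUT
      invoice.addItemQuantity(prod, quantity);
      // Verify Outcome
      LineItem expected = createExpectedLineItem(invoice, prod, quantity);
      assertContainsExactlyOneLineItem(invoice, expected);
   }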

Writing tests in an "outside-in" way can help avoid producing an Obscure Test that would then need to be refactored. This approach starts by outlining the Four-Phase Test using calls to non-existent Test Utility Methods. Once we are satisfied with the tests, we can start writing the utility methods needed to run them. By writing the tests first, we get a better understanding of what the utility methods need to do for us to make writing the tests as simple as possible. The "test-infected" will, of course, write Test Utility Tests (see Test Utility Method) before writing the utility methods.
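
For example, a first draft of a new test written outside-in might look like the sketch below. The createTestAirportWithNoFlights utility method does not exist yet, and both the scenario and the names are purely illustrative; once the test reads the way we want, we write the helper it calls:

   public void testGetFlightsByFromAirportCode_NoOutboundFlights() throws Exception {
      // Setup Fixture - helper to be written after the test outline is settled
      createTestAirportWithNoFlights("YUL", "Montreal");
      // Exercise System
      List flightsAtOrigin = facade.getFlightsByOriginAirportCode("YUL");
      // Verify Outcome
      assertEquals("Flights at origin", 0, flightsAtOrigin.size());
   }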


