A lot of bad practices exist in software development, and when it comes to testing, specifically unit and integration testing, there are still plenty of them. This article discusses a few bad practices around logging. I am providing six rules for effective and useful logging in your tests, plus a library that can help with that.

A few disclaimers first. I am a Java developer at heart, so the remarks here are grounded in that culture and may not translate to other language ecosystems. When I talk about unit tests, I typically mean everything that reasonably runs in JUnit, hence I will be using "unit and integration tests" going forward. I try not to be a purist about that distinction; it is a topic for another day.

In the grand scheme of things, this is about a small problem. If you are not writing any tests, you do not have this problem at all (though then you have bigger problems; go write some tests). However, I do not think it is a frivolous issue, as it touches on the readability and maintainability of your tests and their output.

So, here I am giving you my self-imposed rules for logging in unit and integration tests, plus some tools to help:

Rules for Effective Logging

When I arrive at existing software projects, things tend to be a mess when looked at with my detail-oriented disposition; disarray. If there are working unit and integration tests at all, they produce many thousands of lines of log messages when they run. Let's fix that.

Rule 1. Stop logging to stdout

People still do this. If you are logging to stdout, your operations folk will not love you, simple. (You want to be loved, no?) Reconsider all System.out.println statements: remove them, or turn them into INFO logs using a logging provider (then Rule 2 applies).
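As a minimal before-and-after sketch, assuming SLF4J as the logging facade (ReportService and its method are hypothetical names, used only for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReportService {

    private static final Logger log = LoggerFactory.getLogger(ReportService.class);

    public void downloadReport(String name) {
        // Before: System.out.println("downloading report " + name);
        // After: an INFO log through the logging provider; Rule 2 decides whether it shows up.
        log.info("downloading report {}", name);
    }
}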

Rule 2. Silence logs at level INFO or below

The detailed logs of the application are a distraction when looking at build logs. And that is only if they do not make the build logs prohibitively large (crashing your browser, or getting truncated by the CI server). You can selectively crank up the logging when needed to troubleshoot a failing test. I contend that most INFO logging may not even be terribly useful when troubleshooting specific issues, since it is typically written for happy-flow scenarios. The DEBUG logging left behind by others (or former versions of yourself) is typically aimed at a specific issue, different from your current issue, and thus, well, useless too. Silence it. Make a separate logback.xml (or the equivalent for your logging provider) on the test class-path, to print only WARN logs or above by default.
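A minimal sketch of such a configuration, assuming Logback: placed on the test class-path as src/test/resources/logback-test.xml, Logback picks it up before any logback.xml.

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Only WARN and ERROR reach the test output by default. -->
    <root level="WARN">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>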

By their nature, WARN and ERROR logs often include an exception:

catch (IOException exception) {
    log.warn("unable to download report", exception);
}

Arguably, there is something wrong with the exception handling in this example: why is the exception swallowed with no consequence? Maybe you would rather throw a RuntimeException or some custom ReportDownloadException, with the IOException as its cause. In that case:

Rule 3. Do not log errors for which you are still throwing an exception

Otherwise you will end up with the same problem being logged more than once, which is terribly confusing to troubleshoot. (Again: operations folk will not love you.)
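A sketch of that pattern, using the custom ReportDownloadException suggested above (the class names and the fetchOverHttp call are hypothetical): wrap the cause, do not log, and let the problem be logged once, wherever the exception is ultimately handled.

import java.io.IOException;

public class ReportService {

    public byte[] downloadReport(String name) {
        try {
            return fetchOverHttp(name); // hypothetical call that may throw IOException
        } catch (IOException exception) {
            // No log.warn here: the exception carries the message and the cause,
            // so the problem gets logged exactly once, wherever it is finally handled.
            throw new ReportDownloadException("unable to download report", exception);
        }
    }

    /** Hypothetical custom unchecked exception, wrapping the original cause. */
    public static class ReportDownloadException extends RuntimeException {
        public ReportDownloadException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    private byte[] fetchOverHttp(String name) throws IOException {
        throw new IOException("connection refused"); // stand-in for real I/O
    }
}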

But sometimes the specified business logic demands that you catch an exception, log it, and move on. If this is a common occurrence, you may use an INFO log, or no log at all. Otherwise a WARN or ERROR log may be appropriate, and adding the "cause" exception to the log statement is a good way to add context. However, you will then end up with a stack trace in your test output.

Rule 4. Absorb known WARN and ERROR logs and their stack traces in test output

These stack traces are red flags to people casually looking at the build logs. If you are writing good tests, you will hit the code that causes the WARN and ERROR logs and their stack traces to happen. Good for you! But then make sure to mute the log momentarily for that statement.
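Without a helper library, a hand-rolled sketch could look like the following, assuming Logback underneath SLF4J (ReportService is a hypothetical class whose downloadReport method logs the WARN from the earlier catch block):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.junit.Test;
import org.slf4j.LoggerFactory;

public class ReportServiceTest {

    private final ReportService service = new ReportService(); // hypothetical class under test

    @Test
    public void testDownloadReportFailure() {
        Logger logger = (Logger) LoggerFactory.getLogger(ReportService.class);
        Level originalLevel = logger.getLevel();
        logger.setLevel(Level.OFF); // mute this logger just for the statement below
        try {
            service.downloadReport("bananas"); // triggers the known "unable to download report" WARN
        } finally {
            logger.setLevel(originalLevel); // restore, so unexpected problems still show up
        }
    }
}

And you may even choose to verify that these logs happened: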

Rule 5. Verify log output that you care about

I wish to be pragmatic about testing, and about testing the occurrence of log statements, and I do not write tests for all log statements. (The thought alone!) But certain log statements may be essential to your operation; perhaps your operations folk have chosen to scrape the log and key important notifications off of particular log statements. Then you had better have confidence in them through testing.

Often, though, the opposite is true: you are absorbing known WARN and ERROR logs (Rule 4), and want to make sure that no other, undesired WARN or ERROR logs go unnoticed. Arguably, you would rather test the real side effects that you expect to happen or not happen, but testing logs instead can sometimes be the more practical, cheaper option.
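Without a helper library, verifying a log could look like the following sketch, again assuming Logback underneath SLF4J (ReportService and the expected message are hypothetical, borrowed from the earlier examples):

import static org.junit.Assert.assertTrue;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.junit.Test;
import org.slf4j.LoggerFactory;

public class ReportServiceLogTest {

    private final ReportService service = new ReportService(); // hypothetical class under test

    @Test
    public void testDownloadReportFailureIsLogged() {
        Logger logger = (Logger) LoggerFactory.getLogger(ReportService.class);
        ListAppender<ILoggingEvent> appender = new ListAppender<>();
        appender.start();
        logger.addAppender(appender); // capture everything this logger emits
        try {
            service.downloadReport("bananas"); // expected to log the WARN under test
            assertTrue(appender.list.stream().anyMatch(event ->
                    event.getLevel() == Level.WARN
                            && event.getFormattedMessage().contains("unable to download report")));
        } finally {
            logger.detachAppender(appender);
        }
    }
}

Hand-rolled sketches like these quickly add up, though; the Testlog library described below combines the muting and the asserting.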

Rule 6. Tests do not log

Your tests do not need to log anything. If you feel the need to log progress in your tests, maybe these tests are too large, trying to do too many things. (Tests should be as to the point as possible, ideally verifying only a single detail or concern per test. The name of the test should then be expressive of the scenario being tested.) Otherwise, maybe your assertions are not clear enough. I cannot remember a single time when I deemed it useful for a test to log its own status or progress. And in a sense it would violate Rule 2.

Note

One more point around the absorbing of WARN and ERROR logs. If you are writing simple unit tests, you can just catch the exception in the test code and avoid the stack trace, right? However, even if you abide by Rule 3, you may still have places with WARN or ERROR logs where no exception is thrown to be caught. A more common scenario I see is integration testing a (REST) API, where the test intentionally causes a "server-side" error. I verify that the server responds with the expected HTTP 404 status code, or (if need be) HTTP 500, or whatever the expected behavior is. But meanwhile I am still stuck with the "server-side" error log; so: absorb it (Rule 4).

Tools for Logging in Tests

Some of these rules are simply practices you have to stay on top of yourself, but for the others, here are some tools that help:

  • Rule 2: the configuration file of whatever your logging provider is; logback.xml for Logback, log4j.properties or log4j.xml for Log4j.
  • Rule 4: the LogMuter class or the MutedLogAsserter class of the Testlog library (see below), to absorb known logs.
  • Rule 5: the LogAsserter class or the MutedLogAsserter class of the Testlog library (see below), to verify log output that you care about.

Testlog Library

This is a shameless plug for a library I wrote, but it is open source and altruistic, so you will forgive me. You can add the Testlog library to your test class-path and use it in test classes to mute log output, to assert on log output above a certain level, and to specify exceptions for expected (WARN and ERROR) logs while asserting that no unexpected ones occur.

You can use it "wholesale" (for entire classes), or more selectively (where you would wrap just the statements that you expect to cause (WARN or ERROR) logs).

There are plenty of examples of using Testlog on the GitHub page, but here is a good example that uses the MutedLogAsserter class (my favorite):

public class MyTest {

    private MutedLogAsserter mutedLogAsserter
            = MutedLogAsserter.setUpMutedLogAsserter(Level.WARN); // (1)

    @Test
    public void testSomething() {
        try (ExpectedLogs ignored = mutedLogAsserter.expect(
                Level.ERROR, Level.ERROR, Level.WARN)) { // (2)

            ...test something... expectations are observed... // (3)

        } // asserts and resets here // (4)

        ...test some more... no more expectations...
    }

    @After
    public void tearDown() {
        mutedLogAsserter.tearDown(); // (5)
    }
}

This code does the following:

  • (1) Create a MutedLogAsserter instance at the class level. It mutes all logs, and it asserts on any logs of level WARN or above. (Thanks to the way JUnit works, it will be re-initialized before every test.) You could add this to a base class that is commonly reused for integration tests, if you have such a base class (see the sketch after this list).
  • (2) Wrap the "test something" code block (3) with expectations. This ensures that the expected logs during (3) do not trigger assertions, and that at the end of the code block (4) it is verified that the expected logs have indeed occurred.
  • (5) Tear down the instance after every test. If there were any unsatisfied expectations, they would be asserted here (although that would probably not happen in this particular example, because the code block (3) already handles its expectations).
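A sketch of the base-class approach mentioned at (1), reusing only the calls from the example above (IntegrationTestBase is a hypothetical name):

public abstract class IntegrationTestBase {

    protected MutedLogAsserter mutedLogAsserter
            = MutedLogAsserter.setUpMutedLogAsserter(Level.WARN); // re-initialized before every test

    @After
    public void tearDownMutedLogAsserter() {
        mutedLogAsserter.tearDown(); // asserts any unsatisfied expectations, then cleans up
    }
}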

A more concrete example:

@Test
public void testGetListForBadCategory() {
    try (ExpectedLogs ignored = expectLogs(Level.WARN)) {
        with().queryParam("category", "bananas")
                .expect().statusCode(HttpStatus.SC_BAD_REQUEST)
                .and().body("message", is("Validation error: bananas is not a valid category, use either public or private"))
                .when().get(path("list"));
    }
}

Conclusion

Please be a good steward of logging in your unit and integration tests. Tests should be silent, without turning a blind eye to ERROR and WARN logs. The Testlog library may be of use. Feel free to let me know your thoughts.