Today, we’re going to look at unit testing in general, and then focus on what makes good and bad test inputs. There are two main approaches to unit testing: black-box testing, which focuses on the way the code would be used, and white-box testing, which focuses on the way the code functions internally.
For black-box testing, one can generally work from the specification of the method under test and choose inputs based on this information. There are several complementary strategies for choosing such inputs; to illustrate them, let us consider the specifications of three methods:
Our goal is to choose good testing values based on the information we have about each of these parameters. There are several strategies we can use; they include using typical input values, values outside the specified range, and corner cases.
Using Typical Values
One good strategy for picking black-box testing values is to use typical values. Here, "typical" is to be understood with respect to the specification: one may need to take into account the description itself, any implied information, and, if helpful, additional cues such as parameter names. To illustrate this, let us look at our examples.
For "age", one could reasonably choose a value between 1 and 100 (since human ages tend to fall into that range). That being said, what a "typical" age is depends on the application being developed: since this one manages information about school children, one should pick a number like, say, 12 instead of 45.
An additional observation for this method is that there are likely several distinct cases to consider (for instance, the details of parental consent for a six-year-old would differ from those for a 16-year-old). For good testing, one should have a test for each case; this is generally known as equivalence class partitioning.
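To make this concrete, here is a minimal sketch of what such partition-based tests could look like. The method `needsParentalConsent` and its age cutoff of 14 are purely hypothetical assumptions for illustration; the article's actual method specification is not shown here.

```java
// Hypothetical consent rule, used only to illustrate equivalence class
// partitioning; the cutoff of 14 is an assumption, not from the article.
public class ConsentRules {

    // Returns true if a child of the given age needs parental consent.
    public static boolean needsParentalConsent(int age) {
        return age < 14;
    }

    public static void main(String[] args) {
        // One test per equivalence class: a young child and an older student.
        if (!needsParentalConsent(6)) {
            throw new AssertionError("a six-year-old should need consent");
        }
        if (needsParentalConsent(16)) {
            throw new AssertionError("a 16-year-old should not need consent");
        }
        System.out.println("Both age partitions covered");
    }
}
```

The point is not the specific cutoff but the structure: each equivalence class gets its own test with a typical value from inside that class.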
For “email”, something like “firstname.lastname@example.org” would work well. That being said, note that the specification says “valid e-mail address.” Does this mean that the address must be syntactically valid, or that it actually exists? We will come back to these points in the next sections.
Another point to consider in this case is that the method under test seems to operate on an underlying distribution list object that may or may not already contain a given e-mail address: should we choose an address that is contained in the list, or one that is not? This is another case where equivalence class partitioning suggests two tests: one with an e-mail address that is already contained in the distribution list, and one with an address that is not.
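The two membership partitions can be sketched as follows. The `DistributionList` class below is a minimal stand-in invented for illustration; the article's real class and its `add` semantics (returning whether the address was newly added) are assumptions, not part of the original specification.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal, hypothetical stand-in for the distribution list under test.
public class DistributionList {

    private final Set<String> addresses = new HashSet<>();

    // Adds the address; returns true only if it was not already in the list.
    public boolean add(String email) {
        return addresses.add(email);
    }

    public static void main(String[] args) {
        DistributionList list = new DistributionList();
        // Partition 1: the address is not yet in the list.
        if (!list.add("firstname.lastname@example.org")) {
            throw new AssertionError("new address should be added");
        }
        // Partition 2: the address is already in the list.
        if (list.add("firstname.lastname@example.org")) {
            throw new AssertionError("duplicate address should not be added again");
        }
        System.out.println("Both membership partitions covered");
    }
}
```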
For "inputFiles", the specification says that the list should contain file names. But we have some additional information: the parameter name implies that the files are used as inputs, so we will likely expect the names to refer to actual, existing files. The specification states that the files are JSON files of some unspecified structure (which is likely described somewhere else, perhaps in the class documentation).
One way of producing such a list is to create three temporary files and use their names. Of course, for a good test, these files should probably not be empty; from the specification, however, we only know that they should be JSON files. Lacking further information, one option is to write syntactically valid JSON. Another (better) option is to use existing example inputs, if any exist, or to use existing code that produces these files to create some examples. If all else fails, example inputs can usually be constructed by hand from the specification of the file format.
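A test fixture along these lines could look like the sketch below. The minimal JSON content is an assumption, since the article does not specify the file format; a real test should use the actual expected structure.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class InputFileFixture {

    // Creates three temporary files containing minimal, syntactically valid
    // JSON and returns their names. The content "{\"id\": i}" is a placeholder
    // assumption; the real file structure is not given in the specification.
    public static List<String> createInputFiles() {
        try {
            List<String> names = new ArrayList<>();
            for (int i = 0; i < 3; i++) {
                Path file = Files.createTempFile("input" + i, ".json");
                Files.writeString(file, "{\"id\": " + i + "}");
                file.toFile().deleteOnExit();
                names.add(file.toString());
            }
            return names;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        List<String> files = createInputFiles();
        if (files.size() != 3) {
            throw new AssertionError("expected three input files");
        }
        for (String name : files) {
            if (!Files.exists(Path.of(name))) {
                throw new AssertionError("file should exist: " + name);
            }
        }
        System.out.println("Created " + files.size() + " JSON input files");
    }
}
```

Creating the files inside the test (rather than relying on files checked into the repository) keeps the test self-contained and independent of the environment it runs in.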
Take Context into Consideration
To sum up, in black-box testing, one way to produce good test inputs is to select typical input values, where the definition of what's typical can be derived from any specifications available. The best values can be selected by taking as much context as possible into consideration. Often, it is useful to write multiple tests that cover different classes of inputs.
If you don’t want to write your own tests, however, you can try Diffblue Cover, an AI-powered tool that automatically generates unit tests for Java code.