This exercise will help you practice and demonstrate your ability to write testable code, to apply simple test suite adequacy measures, and to extract greater value from test suites. Note that we will revisit testing in a future exercise that requires you to build tools that assist in the testing process. Unlike previous exercises, this exercise will not require you to submit (much) valid code; instead, you will be reasoning about programs and test suites. For tasks involving code, you can upload solutions through CourSys. For tasks involving questions, you can submit your answers via CourSys text forms.
You can find a template for this exercise
here.
Building the C++ component of the exercise requires clang, which is already
installed in CSIL. From a clean build directory, you can configure and build
the C++ component with:
cmake -DCMAKE_CXX_COMPILER=clang++ <path/to/C++/source>
make
The Java component can again be built and tested with:
mvn test
Look at the C++ code for Cat methods mightExplode(), canRest(), and willRun()
in lib/felinity/felinity.cpp. The code contains several conditional
behaviors that may be prone to bugs. In this task, you will explore how MC/DC
testing behaves on this code. The exercise template is configured to already
extract the MC/DC coverage for you using clang and llvm-cov. You can
run the tests and collect MC/DC coverage by running
make cat-conditions-mcdc
This will create coverage/cat-conditions/index.html that you can open in a
web browser to see the MC/DC coverage of the felinity library.
Test cases for the library are in test/cat/conditions.cpp.
You will need to both modify and analyze the tests inside this file for the
pieces of this task.
Look at the method mightExplode().
A test suite for this method exists in the provided tests,
but it currently contains no test cases.
Add test cases with the smallest number of invocations of mightExplode() that
provide MC/DC coverage for this code.
Your test suite should pass.
You may assume the current implementation is correct.
Reflect on whether this test suite is good or bad based on our discussion in class.
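To see why MC/DC can need fewer invocations than exhaustive truth-table testing, consider a minimal sketch in Java; the predicate here is a hypothetical stand-in, not the real mightExplode(), whose conditions are in lib/felinity/felinity.cpp:

```java
public class McdcSketch {
    // Hypothetical stand-in decision with two conditions.
    static boolean decision(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // MC/DC for `a && b` needs only 3 of the 4 possible inputs:
        // (T,T) vs (F,T) shows `a` independently affects the outcome;
        // (T,T) vs (T,F) shows `b` independently affects the outcome.
        System.out.println(decision(true, true));   // true
        System.out.println(decision(false, true));  // false
        System.out.println(decision(true, false));  // false
    }
}
```

The same pairing argument, applied per condition, determines the smallest set of invocations for larger decisions.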
Look at the method canRest().
You can find a test suite for this method in the provided tests,
but it does not have full MC/DC coverage.
Write additional tests in the given test suite so that it has full MC/DC coverage.
Look at the set of methods willRunX(). Each of these provides an
equivalent implementation of the same behavior.
Look at the MC/DC coverage reports for these methods.
What conclusions can you draw from the differences in source code and/or
coverage for these different implementations?
Submit your answer to this question via the text entry in CourSys.
Recall that we discussed testing as sampling from the input space of a program to build confidence that the program meets its requirements. For sampling to build confidence, it helps to have many samples, yet many tests are written to check only whether a specific single input yields a specific single outcome. To gain better leverage, we can write tests that check many samples or even reason about distributions of inputs. This is the goal of parameterized unit testing and property-based testing.
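As a minimal sketch of this shift, in plain Java with no test framework (the property, seed, and sample count are illustrative assumptions):

```java
import java.util.Random;

public class SamplingSketch {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // A classic unit test checks one concrete input/output pair.
        // Here we instead check a general property, Math.abs(x) >= 0,
        // over many sampled inputs.
        for (int i = 0; i < 1000; i++) {
            int x = rnd.nextInt();
            // Integer.MIN_VALUE has no positive counterpart, so we filter
            // it out rather than treat it as a counterexample.
            if (x == Integer.MIN_VALUE) continue;
            if (Math.abs(x) < 0)
                throw new AssertionError("property violated for " + x);
        }
        System.out.println("property held for all sampled inputs");
    }
}
```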
Consider the Java code for an interval in src/main/java/ca/sfu/cmpt745/ex03/Interval.java.
In this task you will add tests for its overlaps() and intersects() methods.
The provided implementation has a bug that will produce incorrect results.
Your goal will be to write tests that can check the behaviors of many inputs.
One of the key differences that developers note when writing tests this way is that they no longer encode concrete values in their test assertions. Because the tests are written to work for many different inputs, they tend to focus on invariants, relationships, and other properties that should follow from a specification. For instance, you might consider whether an operation is symmetric (a.overlaps(b) exactly when b.overlaps(a)) or reflexive (every interval overlaps itself).
The idea is that the conditions/properties of correctness are more general than pairing a specific outcome with a specific input. Where a function might have different properties in different cases, inputs can be selected, generated, or filtered to only the inputs where a property applies as a part of running a test.
Look at the overlaps() method. You will write parameterized unit tests
for this method so that you can write a small number of tests and pass more
data into those tests. The particular approach will use the built-in
JUnit support for parameterized unit tests.
Look at the example parameterized unit tests in IntervalTests.java for
contains(). They are identified using the @ParameterizedTest annotation.
Unlike classic unit tests, these tests take arguments controlling the test
case that will be executed. The values for these arguments must be sourced
from somewhere, such as a static field, a method invocation, or one of JUnit's standard argument sources.
Because Interval objects are custom object types, the example tests use
a method returning a stream/sequence of test cases containing the intended
arguments.
The method returning test cases is chosen with the @MethodSource annotation.
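If the shape of this pattern is unfamiliar, it can be sketched in plain Java without JUnit on the classpath: one method supplies a stream of cases, and a single test body consumes them. The Interval record below is an illustrative stand-in for the class in the template, and all names are hypothetical:

```java
import java.util.stream.Stream;

public class ParameterizedSketch {
    // Hypothetical stand-in for ca.sfu.cmpt745.ex03.Interval.
    record Interval(int lo, int hi) {
        boolean overlaps(Interval o) { return lo <= o.hi && o.lo <= hi; }
    }
    record Case(Interval a, Interval b, boolean expected) {}

    // Analogue of an @MethodSource provider: one method yields all cases.
    static Stream<Case> cases() {
        return Stream.of(
            new Case(new Interval(0, 5), new Interval(3, 9), true),
            new Case(new Interval(0, 1), new Interval(2, 3), false)
        );
    }

    public static void main(String[] args) {
        // Analogue of the @ParameterizedTest body: one check, many inputs.
        cases().forEach(c -> {
            if (c.a().overlaps(c.b()) != c.expected())
                throw new AssertionError("failed for " + c);
        });
        System.out.println("all cases passed");
    }
}
```

JUnit's annotations automate exactly this wiring, so the test method itself stays a single parameterized body.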
Complete your parameterized unit tests for overlaps() in the noted section.
Your tests should fail if the implementation is incorrect.
Look at the intersects() method. You will now write property-based tests (PBTs)
for this method. Property-based tests extend parameterized unit tests by enabling
(1) automated data generation for more extensive sampling and
(2) automated simplification (shrinking) of complex test cases that fail.
This particular approach will use jqwik for PBT.
Look at the example PBTs in IntervalTests.java for contains().
They are identified by the @Property annotation, but instead of referring
to a fixed source of data, the arguments of these tests are annotated with
@ForAll to indicate the values for which the property should hold.
For primitive or built-in types, random values can be constructed automatically
and fed to the test case.
For Interval arguments, the provided generator intervals() is used to
automatically construct a random valid Interval.
Notice that this one simple generator can be written once and used
repeatedly across unit tests.
The generated values can be filtered using Assume.that() as shown to perform
rejection sampling and restrict the values to which a test case applies.
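The generator-plus-filter workflow can be sketched in plain Java without jqwik; the Interval record and all names below are illustrative stand-ins, not the template's code:

```java
import java.util.Random;

public class PropertySketch {
    // Hypothetical stand-in for ca.sfu.cmpt745.ex03.Interval.
    record Interval(int lo, int hi) {
        boolean overlaps(Interval o) { return lo <= o.hi && o.lo <= hi; }
    }

    // Analogue of a generator: produce a random valid interval.
    static Interval intervals(Random rnd) {
        int lo = rnd.nextInt(100);
        return new Interval(lo, lo + rnd.nextInt(100));
    }

    public static void main(String[] args) {
        Random rnd = new Random(3);
        for (int i = 0; i < 1000; i++) {
            Interval a = intervals(rnd), b = intervals(rnd);
            // Analogue of Assume.that(): restrict to overlapping pairs
            // via rejection sampling.
            if (!a.overlaps(b)) continue;
            // Property under test: overlaps is symmetric.
            if (!b.overlaps(a))
                throw new AssertionError("asymmetric for " + a + ", " + b);
        }
        System.out.println("property held");
    }
}
```

jqwik adds what this sketch lacks: automatic shrinking of any failing pair to a minimal counterexample.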
Consider which properties (possibly from the list above) make sense for
checking the correctness of this method.
Complete your property based tests for intersects() in the noted section.
Your tests should fail if the implementation is incorrect.
Examine the code for CatManager in felinity.h.
This code is arguably poorly written and, as a result, challenging to test.
Reflect on how you would rewrite the code to improve its testability
and add reliable test cases for it.
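One common testability refactor is to pass collaborators in through an interface so that tests can substitute fakes. A minimal Java sketch of the idea (all names here are hypothetical and unrelated to the real CatManager in felinity.h):

```java
import java.util.ArrayList;
import java.util.List;

public class TestabilitySketch {
    // A hard-to-test design constructs and calls its collaborators
    // directly; injecting them behind an interface lets a test observe
    // and control the interaction.
    interface Feeder { void feed(String cat); }

    static class Manager {
        private final Feeder feeder;               // injected, not hard-coded
        Manager(Feeder feeder) { this.feeder = feeder; }
        void feedAll(List<String> cats) { cats.forEach(feeder::feed); }
    }

    public static void main(String[] args) {
        // In a test, substitute a recording fake for the real feeder.
        List<String> fed = new ArrayList<>();
        Manager m = new Manager(fed::add);
        m.feedAll(List.of("Tom", "Felix"));
        System.out.println(fed); // the test can now check who was fed
    }
}
```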
For the code components, make sure to upload the modified files to CourSys.
For the written answers, make sure to submit your answers via the text-entry forms in CourSys.