In your previous software engineering courses, you should have at least seen and hopefully used unit testing. You should also understand the important role that testing (and unit testing in particular) plays in agile software development. In the context of this course, you will be expected to create tests built upon the Google Test framework for C++. In this exercise, you will see how to integrate unit testing using the Google Test framework into a C++ project built using CMake. You will gain first-hand experience writing some simple unit tests, and you will be provided with resources that allow you to use more complex testing patterns within your unit tests. In addition, you will see how to set up continuous integration for a project in GitLab so that you can get immediate feedback on a project, including failing tests, test coverage information, static analysis results, and other project analysis artifacts.
For outside information, refer to:
Please recall that you should be using out-of-source builds. You will ultimately submit only the source code of your project and not any of the build artifacts. You should also not modify any of the provided header files or non-test source code files. They will be replaced during grading.
Submissions that do not compile and run from a clean build directory using the commands
will receive 0 points.
The provided files for this exercise illustrate a reasonable way to incorporate Google Test into a project. The project itself implements a small library with a variety of different functionalities. The source and header files for this library reside in the `lib/` directory. Your task in this exercise is to test this project using the facilities in the Google Test framework. All of the tests for the project will live in the `test/` directory. The test directory also includes the source code of Google Test and Google Mock inside `test/lib/`. Including the source code of Google Test in the project and compiling it as a part of the project has two key advantages: (1) it ensures that the version of Google Test used to run the test suite is consistent, and (2) it avoids some subtle corner cases involved with compiling, linking, and testing native code specifically. In practice, you could instead include a more compact version of Google Test and Google Mock.
All of the files that you need to modify for the first 3 tasks can be found in the `test/` directory.

NOTE: The files inside `lib/` will be replaced during the grading process. Do not modify the files in those directories to create your tests.

Remember to follow the instructions carefully, as projects will be graded (mostly) automatically. Specifically, make sure to spell fixture or test group names exactly as specified.
To create a set of related test cases, you should create a C++ source file in the `test/` directory. Make sure to include `gtest/gtest.h` and/or `gmock/gmock.h` as necessary. All of the source files for your tests should then be added to the list of source files for creating the `runAllTests` program in `test/CMakeLists.txt`. You can do this by editing the call to `add_executable` in that file. Notice that there is also an `add_test` call in `test/CMakeLists.txt`; you do not need to modify this. The libraries for Google Test and Google Mock will already be linked in by the `CMakeLists.txt` configuration, as you can see on the `target_link_libraries` lines.
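For instance, after adding new test files, the edited call might look like the sketch below. The file names other than `runAllTests` are hypothetical; keep whatever sources the provided file already lists and append your own:

```cmake
# test/CMakeLists.txt (sketch) -- add your new test sources to the
# existing list rather than replacing it.
add_executable(runAllTests
  ParallelogramTests.cpp   # hypothetical new test source
  MatthewsTests.cpp        # hypothetical new test source
  AwardsTests.cpp          # hypothetical new test source
)
```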
Open the files `lib/simple/include/Parallelogram.h` and `lib/simple/Parallelogram.cpp`. These files provide the declaration and definition of a simple `Parallelogram` class. The constructor of `Parallelogram` takes in the integral lengths of the sides of a parallelogram and the measure of one interior angle in floating point degrees. By contract, the angle must be greater than 0 degrees and at most 90 degrees. Note that there are bugs in the `getPerimeter()`, `getArea()`, and `getKind()` methods. You must complete the following tasks for the `Parallelogram` class using the Google Test framework:
Create test cases for `Parallelogram` that do not result in failure. Validate this by actually running the test cases. Each of these test cases should be in its own test function, and the group name or test fixture for the tests should be called `ParallelogramTests`. This name will be used to automatically extract individual tests during grading.
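A sketch of what such tests might look like is below. The constructor argument order (two side lengths, then the angle) and the expected values are assumptions for illustration; confirm them against `Parallelogram.h`, and remember that the known bugs mean you must check which expectations actually pass when run:

```cpp
#include "gtest/gtest.h"
#include "Parallelogram.h"  // provided header in lib/simple/include/

// The group name must be spelled exactly ParallelogramTests.
// Assumption for illustration: the constructor takes two side lengths
// followed by the interior angle in degrees.
TEST(ParallelogramTests, PerimeterOfThreeByFour) {
  Parallelogram shape{3, 4, 90.0};
  EXPECT_EQ(14, shape.getPerimeter());
}

TEST(ParallelogramTests, AreaWithRightAngle) {
  // With a 90 degree interior angle, the area reduces to length * width.
  Parallelogram shape{3, 4, 90.0};
  EXPECT_DOUBLE_EQ(12.0, shape.getArea());
}
```

Each behavior gets its own `TEST` function so that the grader (and `--gtest_filter`) can run them individually.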
Now consider the function `checkMatthewsOutcome()` declared in `Matthews.h` and defined in `Matthews.cpp`. Create a set of related tests such that every statement in `checkMatthewsOutcome()` is executed by at least one test. The group name or test fixture for the tests should be called `MatthewsTests`. Make sure the name is correct.
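The signature of `checkMatthewsOutcome()` lives in `Matthews.h`, so the sketch below only shows the shape of such a suite; the call and expected values are placeholders that you must fill in after reading the header:

```cpp
#include "gtest/gtest.h"
#include "Matthews.h"  // provided header; do not modify it

// Aim for one test per path: pick inputs that drive execution down each
// branch of checkMatthewsOutcome() until every statement is covered.
TEST(MatthewsTests, CoversOnePathThroughTheFunction) {
  // Placeholder: call checkMatthewsOutcome(...) with inputs chosen for
  // this path and use EXPECT_* assertions on its result.
}
```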
Consider the `performAwardCeremony` function declared in `Awards.h` and defined in `Awards.cpp`. This function reads a list of names from a sequence and awards medals to the first three names. Write a test case that makes sure the ceremony runs as intended. You will need to create a stub for `RankList` and a mock for `AwardCeremonyActions`. The methods of `AwardCeremonyActions` should each be called exactly once, in the order: `playAnthem()`, `awardBronze()`, `awardSilver()`, `awardGold()`, and `turnOffTheLightsAndGoHome()`. The `getNext()` method of `RankList` should be called three times, and the names returned should be passed to `awardBronze()`, `awardSilver()`, and `awardGold()` in the same order they are read from the list. I highly recommend that you consult the Google Mock Dummies Guide to make sure that you (1) correctly create the test fakes, (2) validate that the methods were called, and (3) validate that they were called in the right order and with the right arguments. The group name or test fixture for the test should be called `AwardsTests`.
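A hedged sketch of the fakes and ordering checks follows. It assumes `getNext()` returns `std::string`, that the award methods each take the awardee's name, and that `performAwardCeremony()` accepts the list and the actions by reference; the real virtual method signatures are in `Awards.h` and must be matched exactly for the overrides to compile:

```cpp
#include <string>

#include "gtest/gtest.h"
#include "gmock/gmock.h"
#include "Awards.h"  // declares RankList, AwardCeremonyActions, performAwardCeremony

using ::testing::InSequence;
using ::testing::Return;

// Stub for the name source. Signature assumed; check Awards.h.
class StubRankList : public RankList {
public:
  MOCK_METHOD(std::string, getNext, (), (override));
};

// Mock for the ceremony actions. Signatures assumed; check Awards.h.
class MockCeremonyActions : public AwardCeremonyActions {
public:
  MOCK_METHOD(void, playAnthem, (), (override));
  MOCK_METHOD(void, awardBronze, (std::string name), (override));
  MOCK_METHOD(void, awardSilver, (std::string name), (override));
  MOCK_METHOD(void, awardGold, (std::string name), (override));
  MOCK_METHOD(void, turnOffTheLightsAndGoHome, (), (override));
};

TEST(AwardsTests, CeremonyAwardsFirstThreeNamesInOrder) {
  StubRankList ranks;
  MockCeremonyActions actions;

  // The stub hands out three names in a known order.
  EXPECT_CALL(ranks, getNext())
      .WillOnce(Return("ana"))
      .WillOnce(Return("bao"))
      .WillOnce(Return("cid"));

  // InSequence makes the expectations below order-sensitive, so each
  // method must be called exactly once and in exactly this order.
  InSequence ordering;
  EXPECT_CALL(actions, playAnthem());
  EXPECT_CALL(actions, awardBronze("ana"));
  EXPECT_CALL(actions, awardSilver("bao"));
  EXPECT_CALL(actions, awardGold("cid"));
  EXPECT_CALL(actions, turnOffTheLightsAndGoHome());

  performAwardCeremony(ranks, actions);
}
```

The `InSequence` guard plus argument matchers cover all three validation goals at once: the calls happen, in order, with the names read from the list.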
First double check that you have named your fixtures well by trying these commands from your build directory:
To submit your exercise, create an archive of the directory containing your source (not your build), and submit it via CourSys. This should contain all of the provided project files along with your additions and modifications necessary to run your tests.

NOTE: Your archive should contain the `googletest-template/` directory from the project archive as well as its subdirectories. This is necessary for your submission to be graded. Again, this should not include your build artifacts. To produce this, run:
Continuous integration (CI) is a development practice where developers feed their progress into a source repository as they develop (even several times a day) rather than only after a substantial amount of work is done. Automated checks such as unit testing and static analysis can then provide constant and convenient feedback on the code so that problems are identified and remediated early (before they propagate). In practice, this often works by connecting a repository to a CI server that can run these helpful tasks after every push and even email developers when problems arise. In this task, you will configure CI for your unit testing project via GitLab and see how it can provide useful information. While it is also possible to use this infrastructure to configure nightly feedback or flexible event driven feedback, we will focus on pushes to the repository.
Completing this portion of the exercise assumes that you have already been sent the address of the CI server that your team will be using for the term project in this course.
You first need to create a GitLab repo for your project and configure it for CI. Then you will set up a Runner, which is the GitLab term for a server that completes CI tasks, and connect it to the repository. To create your project repo, create a new project named `exercise-unit-tests`. Do not initialize the project with a readme. Then, from your project folder:

```shell
cd <PROJECT FOLDER>
git init
git remote add origin git@csil-git1.cs.surrey.sfu.ca:<YOUR USERNAME>/exercise-unit-tests.git
git add .
git commit -m "Initial commit"
git push -u origin main
```
Next, add a file named `.gitlab-ci.yml` to the root of your project. This file controls which tasks are executed and which artifacts are preserved by the CI process. To start out, the file should contain:
```yaml
image: nsumner/cmpt373:fall2023

stages:
  - build
  - test
  - analyze

build:
  stage: build
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=True ../
    - make
  artifacts:
    expire_in: 10 mins
    paths:
      - build/*

test:
  stage: test
  script:
    - build/test/runAllTests

analyze:
  stage: analyze
  script:
    - cppcheck lib/* 2> build/cppcheck-results.txt
    - cd build
    - >-
      /usr/lib/llvm-10/share/clang/run-clang-tidy.py
      -checks='*,-fuchsia-*,-google-readability-todo,-llvm-*,-google-runtime-references,-misc-unused-parameters,-google-readability-namespace-comments,-readability-else-after-return,-modernize-use-trailing-return-type'
      '^(?!.*(googletest|test|networking|nlohmann)).*' > clang-tidy-results.txt
  artifacts:
    expire_in: 1 hour
    paths:
      - build/cppcheck-results.txt
      - build/clang-tidy-results.txt
```
After creating this file, `add` and `commit` it to your repo and `push`.
If you look at the status of your push, you should see it as either pending or failed. This is because you have not yet configured a runner on which the jobs associated with CI can execute. You can do this on the server that was set up for your group. Note that the `gitlab-runner` infrastructure has already been installed and configured, so you need only register a runner for this specific project. In the transcript below, lines starting with `>` are prompts printed by `gitlab-runner`; the other lines are responses that you should fill in as described. You can find the information for your project on the left hand panel under Settings > CI > Runners (click Expand).
```shell
sudo gitlab-runner register
> Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://csil-git1.cs.surrey.sfu.ca/
> Please enter the gitlab-ci token for this runner:
<< TOKEN STRING FOR PROJECT >>
> Please enter the gitlab-ci description for this runner:
<< YOUR USERNAME >>-ci-exercise
> Please enter the gitlab-ci tags for this runner (comma separated):
<< Just hit enter to leave this blank >>
> Please enter the executor: ..., docker, ...:
<< Enter docker >>
> Please enter the default Docker image (e.g. ruby:2.1):
<< Enter nsumner/cmpt373:fall2023 >>
```
This should configure the runner. In response, you should see:

```
Runner registered successfully. Feel free to start it, but if it's running
already the config should be automatically reloaded!
```
If you see something else, you made a mistake and need to configure correctly before continuing.
Now start the runner:

```shell
sudo gitlab-runner start << YOUR USERNAME >>-ci-exercise
```
Your CI should now be set up. In order to test it, let's add a file to the repository and push it. This will trigger the CI pipeline.

```shell
echo "unit testing and CI save time." > README.md
git add README.md
git commit -m "Added a simple README."
git push
```
The `image` section of the configuration selects a base Docker image to use for CI tasks. In this case, it is an image created for this class that you can see here. If you wish to add packages to the image, you can also do so.
The `stages` section describes the tasks to be performed upon each commit. We can see that there are three separate stages to start with: build, test, and analyze. The stages occur sequentially in the given order. If any stage fails, the process stops and the committer will receive an email. In this case, because your test cases fail, you should have received an email alerting you to the results of running your unit tests and actually showing the Google Test results. Note that because the tests failed, we did not get any results from our static analysis tools. We can fix this by changing their order in the stages list. Reorder the list in `.gitlab-ci.yml` so that `analyze` appears before `test`, then commit and push your changes. If you wait a minute and then click on the commit in GitLab, you will see green marks for the first two stages and a red mark for the last.
If you hover over the second green mark, you should see: "analyze: passed". If you click on it, you can see results for the job. In addition, the `artifacts` lines in the CI configuration temporarily saved the results of the static analysis tools. If you want to see them, you can use the right hand "artifacts" panel to download them. Note, this means that you can get `clang-tidy` and `cppcheck` results for free in the CI system and simply download them.
You may be interested in knowing how well you are testing with your unit tests. This is often measured in terms of statement coverage. Statement coverage is simply the fraction of statements (roughly, lines) in your code base that the test suite executes, compared to the total number in the program. We can slightly modify our project so that the statement coverage of our unit tests is extracted and integrated into the CI.
In the main `CMakeLists.txt`, define the arguments for coverage instrumentation by adding this before any targets are generated:

```cmake
if (ENABLE_COVERAGE)
  # NOTE: Coverage only works/makes sense with debug builds
  set(CMAKE_BUILD_TYPE "Debug")
  if("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
    set(CXX_COVERAGE_FLAGS "-fprofile-instr-generate -fcoverage-mapping")
  else()
    message(FATAL_ERROR "Coverage generation is only supported for clang")
  endif()
  message("Enabling coverage instrumentation with:\n  ${CXX_COVERAGE_FLAGS}")
endif()
```
This defines a `CXX_COVERAGE_FLAGS` variable with appropriate arguments when `clang` is used for compilation. Note that other compilers can also generate coverage reports, but we will only use `clang` for it in this walkthrough.
Then modify the `CMakeLists.txt` for the simple library by adding:

```cmake
set_target_properties(simpleLibrary
  PROPERTIES
    COMPILE_FLAGS "${CXX_COVERAGE_FLAGS}"
)
```
And similarly modify the testing `CMakeLists.txt` by adding:

```cmake
set_target_properties(runAllTests
  PROPERTIES
    LINK_FLAGS "${CXX_COVERAGE_FLAGS}"
)
```
These modifications enable the targets in our build process to add in the coverage instrumentation when requested. Any component for which we want coverage information must add the appropriate `COMPILE_FLAGS`, and the final executable must add the coverage arguments to its `LINK_FLAGS` in order for the instrumentation to be preserved.
Now we must modify our CI tasks to collect coverage information. Change the cmake invocation to:

```shell
cmake -DCMAKE_CXX_COMPILER=clang++ -DENABLE_COVERAGE=True -DCMAKE_EXPORT_COMPILE_COMMANDS=True ../
```
Then change the `test` task to:

```yaml
test:
  stage: test
  script:
    - LLVM_PROFILE_FILE="runAllTests.profraw" build/test/runAllTests --gtest_filter=-ParallelogramTests.*
    - llvm-profdata merge -sparse runAllTests.profraw -o runAllTests.profdata
    - llvm-cov show build/test/runAllTests -instr-profile=runAllTests.profdata -show-line-counts-or-regions -output-dir=coverage/ -format="html"
    - llvm-cov report build/test/runAllTests -instr-profile=runAllTests.profdata
  coverage: '/TOTAL.*\s+(\S+\%)/'
  artifacts:
    expire_in: 1 hour
    paths:
      - coverage/*
```
Push your changes to your repository. Notice that we are filtering out the `ParallelogramTests` because they are failing. If any test fails, the CI job will fail and the coverage report will not be generated. Because the tests for the other tasks all pass, you should be able to check your results for the `MatthewsTests` in the coverage artifacts!
The last `llvm-cov report` line and the `coverage:` regular expression also allow GitLab to extract the summary line coverage information. GitLab uses the regular expression to filter CI task output in order to identify code coverage. The actual code coverage is shown on the test job page and can be seen for all jobs in the history of a project via the CI/CD > Jobs page.
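To illustrate how that extraction works, here is a small Python sketch applying the same regular expression as the `coverage:` key to a made-up `TOTAL` summary line. The column values are representative of the shape of `llvm-cov report` output, not copied from a real run:

```python
import re

# The same regular expression used by the coverage: key in .gitlab-ci.yml.
pattern = re.compile(r'TOTAL.*\s+(\S+\%)')

# A made-up summary line in the shape llvm-cov report prints.
sample = "TOTAL    120    12    90.00%    45    5    88.89%    200    20    90.00%"

match = pattern.search(sample)
print(match.group(1))  # the greedy .* makes this the last percentage: 90.00%
```

Because `.*` is greedy, the capture group grabs the final percentage on the line, which is the overall line coverage that GitLab displays for the job.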
Finally, after (and only after) completing all of the steps above, you should add the instructor as a Developer to the repository.
First double check that you have named your fixtures well by trying these commands from your build directory:
To submit your exercise, you must again submit the clonable URL of your repository via CourSys. For instance, my submission might be:

```
https://csil-git1.cs.surrey.sfu.ca/wsumner/exercise-unit-tests.git
```