In your previous software engineering courses, you should have at least seen and hopefully used unit testing. You should also understand the important role that testing (and unit testing in particular) plays in agile software development. In this course, you will be expected to create tests built upon the Google Test framework for C++. In this exercise, you will see how to integrate unit testing with the Google Test framework into a C++ project built using CMake. You will gain firsthand experience writing some simple unit tests, and you will be provided with resources that allow you to use more complex testing patterns within your unit tests. In addition, you will see how to set up continuous integration for a project in GitHub so that you can get immediate feedback on a project, including failing tests, test coverage information, static analysis results, and other project analysis artifacts.
For outside information, refer to:
Please recall that you should be using out-of-source builds. You will ultimately submit only the source code of your project and not any of the build artifacts. You should also not modify any of the provided header files or non-test source code files; they will be replaced during grading.
Submissions that do not compile and run from a clean build directory using the specified commands will receive 0 points.
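(The exact grading commands are not reproduced here. As a reminder, an out-of-source build typically follows this pattern, but treat these specific commands as an assumption and follow the course's exact instructions:)

mkdir build
cd build
cmake ../<PROJECT FOLDER>
make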
The provided files for this exercise illustrate a reasonable way to incorporate Google Test into a project. The project itself implements a small library with a variety of different functionalities. The source and header files for this library reside in the lib/ directory. Your task in this exercise will be to test this project using the facilities of the Google Test framework. All of the tests for the project will live in the test/ directory, which also includes the source code of Google Test and Google Mock inside test/lib/. Including the source code of Google Test in the project and compiling it as a part of the project has two key advantages: (1) it ensures that the version of Google Test used to run the test suite is consistent, and (2) it avoids some subtle corner cases involved with compiling, linking, and testing native code. In practice, you could instead include a more compact distribution of Google Test and Google Mock.
All of the files that you need to modify for the first 3 tasks can be found in the test/ directory.

NOTE: The files inside lib/ will be replaced during the grading process. Do not modify the files in those directories to create your tests. Remember to follow the instructions carefully, as projects will be graded (mostly) automatically. Specifically, make sure to spell fixture or test group names exactly as specified.
To create a set of related test cases, you should create a C++ source file in the test/ directory. Make sure to include gtest/gtest.h and/or gmock/gmock.h as necessary. All of the source files for your tests should then be added to the list of source files for creating the runAllTests program in test/CMakeLists.txt. You can do this by editing the call to add_executable in that file, as sketched below. Notice that there is also an add_test call in test/CMakeLists.txt; you do not need to modify it. The libraries for Google Test and Google Mock are already linked in by the CMakeLists.txt configuration, as you can see on the target_link_libraries lines.
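For example, if your tests lived in files named TestParallelogram.cpp, TestMatthews.cpp, and TestAwards.cpp (these names are hypothetical; you choose your own), the edited call might look like the sketch below, keeping any source files that are already listed there:

add_executable(runAllTests
  TestParallelogram.cpp
  TestMatthews.cpp
  TestAwards.cpp
)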
Open the files lib/simple/include/Parallelogram.h and lib/simple/Parallelogram.cpp. These files provide the declaration and definition of a simple Parallelogram class. The constructor of Parallelogram takes in the integral lengths of the sides of a parallelogram and the measure of one interior angle in floating point degrees. By contract, the angle must be between 0 and 90 degrees, excluding 0 and including 90. Note that there are bugs in the getPerimeter(), getArea(), and getKind() methods.
You must complete the following task for the Parallelogram class using the Google Test framework: write test cases for Parallelogram that do not result in failure, and validate this by actually running the test cases. Each of these test cases should be in its own test function, and the group name or test fixture for the tests should be called ParallelogramTests. This name will be used to automatically extract individual tests during grading.
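A first test might look like the minimal sketch below. The constructor arguments and the expected perimeter are assumptions based on the description above; check Parallelogram.h for the real signatures before relying on them:

#include "gtest/gtest.h"
#include "Parallelogram.h"

// Sketch only: assumes a constructor taking two integral side lengths
// and an angle in degrees, and that getPerimeter() returns an integer.
TEST(ParallelogramTests, ComputesPerimeterOfRectangle) {
  Parallelogram rectangle(3, 4, 90.0);
  // Perimeter of a parallelogram with sides 3 and 4 is 2 * (3 + 4) = 14.
  EXPECT_EQ(rectangle.getPerimeter(), 14);
}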
Now consider the function checkMatthewsOutcome() declared in Matthews.h and defined in Matthews.cpp. Create a set of related tests such that every statement in checkMatthewsOutcome() is executed by at least one test. The group name or test fixture for the tests should be called MatthewsTests. Make sure the name is correct.
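Recall that a fixture class whose name matches the required group name puts every TEST_F in that group. A minimal sketch of the required structure follows; the inputs and expected result of checkMatthewsOutcome() depend on its actual signature in Matthews.h and are deliberately omitted here:

#include "gtest/gtest.h"
#include "Matthews.h"

// The fixture name determines the test group name used during grading.
class MatthewsTests : public ::testing::Test {};

TEST_F(MatthewsTests, ExercisesFirstPath) {
  // Call checkMatthewsOutcome() with inputs chosen to drive execution
  // down one particular path; repeat with new tests until every
  // statement is covered. Concrete arguments depend on Matthews.h.
}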
Consider the performAwardCeremony function declared in Awards.h and defined in Awards.cpp. This function reads a list of names from a sequence and awards medals to the first three names. Write a test case that makes sure the ceremony runs as intended. You will need to create a stub for RankList and a mock for AwardCeremonyActions. The methods of AwardCeremonyActions should be called exactly once each in the order: playAnthem(), awardBronze(), awardSilver(), awardGold(), and turnOffTheLightsAndGoHome(). The getNext() method of RankList should be called three times, and the names returned should be passed to awardBronze(), awardSilver(), and awardGold() in the same order they are read from the list. I highly recommend that you consult the Google Mock Dummies Guide in order to make sure that you (1) correctly create the test fakes, (2) validate that the methods were called, and (3) validate that they were called in the right order and with the right arguments. The group name or test fixture for the test should be called AwardsTests.
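A sketch of what such a test could look like follows. The interface details are assumptions (that RankList and AwardCeremonyActions have virtual methods, that names are std::string, and that performAwardCeremony() takes references to both); check Awards.h for the real signatures:

#include <cstddef>
#include <string>
#include <vector>

#include "gtest/gtest.h"
#include "gmock/gmock.h"
#include "Awards.h"

// Stub: returns canned names in a fixed order, one per getNext() call.
class StubRankList : public RankList {
public:
  std::string getNext() override { return names[index++]; }
private:
  std::vector<std::string> names{"bronzeName", "silverName", "goldName"};
  std::size_t index = 0;
};

// Mock: records calls so that order and arguments can be verified.
class MockAwardCeremonyActions : public AwardCeremonyActions {
public:
  MOCK_METHOD(void, playAnthem, (), (override));
  MOCK_METHOD(void, awardBronze, (std::string recipient), (override));
  MOCK_METHOD(void, awardSilver, (std::string recipient), (override));
  MOCK_METHOD(void, awardGold, (std::string recipient), (override));
  MOCK_METHOD(void, turnOffTheLightsAndGoHome, (), (override));
};

TEST(AwardsTests, CeremonyPerformsActionsInOrder) {
  StubRankList rankList;
  MockAwardCeremonyActions actions;

  // InSequence makes the expectations below order-sensitive.
  testing::InSequence ordering;
  EXPECT_CALL(actions, playAnthem());
  EXPECT_CALL(actions, awardBronze("bronzeName"));
  EXPECT_CALL(actions, awardSilver("silverName"));
  EXPECT_CALL(actions, awardGold("goldName"));
  EXPECT_CALL(actions, turnOffTheLightsAndGoHome());

  performAwardCeremony(rankList, actions);
}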
Continuous integration (CI) is a development practice where developers feed their progress into a source repository as they develop (even several times a day) rather than only after a substantial amount of work is done. Automated checks such as unit testing and static analysis can then provide constant and convenient feedback on the code so that problems are identified and remediated early (before they propagate). In practice, this often works by connecting a repository to a CI server that runs these helpful tasks after every push and can even email developers when problems arise. In this task, you will configure CI for your unit testing project via GitHub and see how it can provide useful information. While it is also possible to use this infrastructure to configure nightly feedback or flexible event-driven feedback, we will focus on pushes to the repository.
You can complete this portion of the exercise in CSIL, in the course docker image, or on your team server if you have received it by the time this exercise is released. To complete it in CSIL, again make sure that you are using the provided virtual environment for the course. Be careful to shut down any runner in CSIL after completing the exercise.
You first need to create an SFU GitHub repo for your project and configure it for CI. Then, you will set up a runner, which is the term for a server that completes CI tasks, and connect it to the repository. To create your project repo:
Create a new repository named exercise-unit-tests. Do not initialize the project with a readme. Then, from your project folder, run:

cd <PROJECT FOLDER>
git init
git remote add origin git@github.sfu.ca:<YOUR USERNAME>/exercise-unit-tests.git
git add .
git commit -m "Initial commit"
git branch -M main
git push -u origin main
Next, add a file .github/workflows/ci.yml to the root of your project. The .github folder contains additional repo information that GitHub can use. The workflows folder within it contains information for specific tasks that should occur in response to events on the repository. The ci.yml file controls which tasks are executed and which artifacts are preserved by the CI process. To start out, the file should contain:
name: CI Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build project
        run: |
          mkdir build
          cd build
          cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=True ../
          make

      - name: Upload build
        uses: actions/upload-artifact@v3
        with:
          name: build-dir
          path: build/
          retention-days: 1

  test:
    needs: build
    runs-on: self-hosted
    steps:
      - name: Download build
        uses: actions/download-artifact@v3
        with:
          name: build-dir

      - name: Run tests
        run: |
          chmod +x ./test/runAllTests
          ./test/runAllTests

  analyze:
    needs: build
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run cppcheck
        run: cppcheck lib/* 2> cppcheck-results.txt

      - name: Run clang-tidy
        run: |
          mkdir build
          cd build
          cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=True ../
          run-clang-tidy -checks='*,-fuchsia-*,-google-readability-todo,-llvm-*,-google-runtime-references,-misc-unused-parameters,-google-readability-namespace-comments,-readability-else-after-return,-modernize-use-trailing-return-type' '^(?!.*(googletest|test|networking|nlohmann)).*' > clang-tidy-results.txt

      - name: Upload cppcheck results
        uses: actions/upload-artifact@v3
        with:
          name: cppcheck-results
          path: cppcheck-results.txt
          retention-days: 1

      - name: Upload clang-tidy results
        uses: actions/upload-artifact@v3
        with:
          name: clang-tidy-results
          path: ./build/clang-tidy-results.txt
          retention-days: 1
After creating this file, add and commit it to your repo and push. If you look at the Actions page of your repository, you should be able to see your commit and see that it has the "queued" status. This is because you have not yet configured a runner on which the jobs associated with CI can execute. You can configure a new runner for your repository on your local machine or on another server, but if the machine is inaccessible, the runners will not be able to run.
We will now configure a runner for your repository. On the GitHub page for your repo, go to "Settings" -> "Actions" -> "Runners". Select "New self-hosted runner". Download and configure the runner using the instructions on that page. During configuration, you will be able to choose a label for the runner. In this case, the default label self-hosted matches the ci.yml file, so your runner will be selected for performing the CI tasks. After completing the configuration, you should see your runner listed on the "Runners" page. If you have not yet run ./run.sh, it will be listed as "Offline". Run ./run.sh, and the runner will be marked "Active" while it is processing your project and then "Idle" when it is done. If you want to install the runner as a background service rather than running it as an application, then as an admin you can use

sudo ./svc.sh install
sudo ./svc.sh start

but you may not want to do this on your personal machine.
Your CI should now be set up. In order to test it, let's add a file to the repository and push it. This will trigger the CI pipeline.
echo "unit testing and CI save time." > README.md
git add README.md
git commit -m "Added a simple README."
git push
It is also possible to have jobs execute inside a Docker container by specifying the image of the container. For example, if we wanted to build inside the course container, we could add:

container:
  image: nsumner/cmpt373:fall2024

under the build job.
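Placed in context, the top of the build job would then look like this sketch (the remaining steps are unchanged from the ci.yml above):

jobs:
  build:
    runs-on: self-hosted
    container:
      image: nsumner/cmpt373:fall2024
    steps:
      # ... the same Checkout, Build, and Upload steps as before ...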
The jobs section describes the tasks to be performed upon each commit. We can see that there are three separate stages to start with: build, test, and analyze. The test and analyze stages depend on build, as indicated by needs. If any stage fails, the process stops and the committer will receive an email. In this case, because your test cases fail, you should have received an email alerting you to the results of running your unit tests. Clicking on "View workflow run" and then "tests" should show the Google Test results. Because the analyze stage does not depend on the test stage, we still get analysis results even when the tests fail. In the "Summary" for the commit action, you should see an "Artifacts" section at the bottom. This contains all of the artifacts that we uploaded using the upload-artifact action in the CI file. The cppcheck and clang-tidy results are saved there and can be downloaded and examined.
You may be interested in knowing how well you are testing with your unit tests. This is often measured in terms of statement coverage: the fraction of statements (roughly, lines of code) in the program that the test suite actually executes, out of the total number of statements. We can slightly modify our project so that the statement coverage of our unit tests is extracted and integrated into the CI.
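As an illustrative sketch (not code from the project), consider how a single test can leave statements unexecuted:

// A test calling only classify(5) executes the first return below, so
// statement coverage would report the second return as unexecuted
// until some test passes in a non-positive value.
int classify(int x) {
  if (x > 0) {
    return 1;   // covered by classify(5)
  }
  return -1;    // uncovered without a test where x <= 0
}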
In the main CMakeLists.txt, define the arguments for coverage instrumentation by adding this before any targets are generated:

if (ENABLE_COVERAGE)
  # NOTE: Coverage only works/makes sense with debug builds
  set(CMAKE_BUILD_TYPE "Debug")

  if("${CMAKE_CXX_COMPILER_ID}" MATCHES "(Apple)?[Cc]lang")
    set(CXX_COVERAGE_FLAGS "-fprofile-instr-generate -fcoverage-mapping")
  else()
    message(FATAL_ERROR "Coverage generation is only supported for clang")
  endif()

  message("Enabling coverage instrumentation with:\n ${CXX_COVERAGE_FLAGS}")
endif()
This defines a CXX_COVERAGE_FLAGS variable with appropriate arguments when clang is used for compilation. Note that other compilers can also generate coverage reports, but we will only use clang for it in this walkthrough. Then modify the CMakeLists.txt for the simple library by adding:

set_target_properties(simple
  PROPERTIES
    COMPILE_FLAGS "${CXX_COVERAGE_FLAGS}"
)
And similarly modify the testing CMakeLists.txt by adding:

set_target_properties(runAllTests
  PROPERTIES
    LINK_FLAGS "${CXX_COVERAGE_FLAGS}"
)
These modifications enable the targets in our build process to add in the coverage instrumentation when requested. Any component for which we desire coverage information must add appropriate COMPILE_FLAGS, and the final executable must add the coverage arguments to its LINK_FLAGS in order for the instrumentation to be preserved.
Now, we must modify our CI tasks to collect coverage information. Change the cmake invocation of the build stage to:
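cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=True -DENABLE_COVERAGE=True ../

(The original snippet is not reproduced here; the -DENABLE_COVERAGE=True flag follows from the if (ENABLE_COVERAGE) guard added to the main CMakeLists.txt above.)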
Then change the Run tests portion of the test task and add a step for uploading the coverage results:

test:
  needs: build
  runs-on: self-hosted
  steps:
    - name: Download build
      uses: actions/download-artifact@v3
      with:
        name: build-dir

    - name: Run tests
      run: |
        chmod +x ./test/runAllTests
        LLVM_PROFILE_FILE="runAllTests.profraw" ./test/runAllTests --gtest_filter='-ParallelogramTests.*'
        llvm-profdata merge -sparse runAllTests.profraw -o runAllTests.profdata
        llvm-cov show test/runAllTests -instr-profile=runAllTests.profdata -show-line-counts-or-regions -output-dir=coverage/ -format="html"
        llvm-cov report test/runAllTests -instr-profile=runAllTests.profdata

    - name: Upload coverage
      uses: actions/upload-artifact@v3
      with:
        name: coverage
        path: coverage/
        retention-days: 1
Push all of your changes to your repository. Notice that we are filtering out the ParallelogramTests because they are failing. If any test fails, the CI job will fail and the coverage report will not be generated. Because the tests for the other tasks all pass, you should be able to check your results for the MatthewsTests in the coverage artifacts!
Finally, after (and only after) completing all of the steps above, you should add the instructor and TAs as Collaborators in the repository.
First, double-check that you have named your fixtures correctly by trying these commands from your build directory:
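(The original commands are not reproduced here; one reasonable check, assuming the test binary is built at test/runAllTests, is to filter by each required group name:)

./test/runAllTests --gtest_filter='ParallelogramTests.*'
./test/runAllTests --gtest_filter='MatthewsTests.*'
./test/runAllTests --gtest_filter='AwardsTests.*'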
To submit your exercise, you must again submit the clonable tag for your repository via CourSys. From the page for your repository, click "Code" -> "SSH" and copy the tag. For instance, my submission might be:
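git@github.sfu.ca:<YOUR USERNAME>/exercise-unit-tests.git

(This mirrors the remote URL used when creating the repository; substitute your own username.)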
Also download and submit the coverage.zip from your last commit Action.