Hi all!
I've done some research on what we can do to improve our CI system.
First of all, I want to clarify: our CI consists of two parts: building
and testing. I'm assuming here that the "build" part is fine for us and
doesn't require any improvements in workflow or the like, so I'm focusing
only on the "testing" part.
I tried to find existing solutions similar to our testman, but the
situation on that front is pretty bad.
- There are some plugins for Jenkins which show a table with test
results for each build
- Microsoft TFS has functionality for storing tests and doing some
analysis on them, but that's a no-go for us
- JetBrains' TeamCity also has this functionality, and offers a license
for Open Source projects
I use JetBrains tools at work and like the company in general :) so I
decided to run some experiments with it.
Some facts:
- It has out-of-the-box integration with Jira and GitHub. It works
flawlessly (bonus: you can automatically build and test all PRs from team
members, for example)
- Integration with GitHub Checks (the green checkmark, which currently
comes from Travis & AppVeyor for us)
- Has an interface for filtering test results by module, and shows
execution times, statistics, etc.
- Runs on the JVM :) both the master and the slave part
- PostgreSQL, MySQL and some built-in DB can be used for storage
- Uses its own DSL, derived from the Kotlin language, for configuration
(but allows configuring through the GUI too)
Adopting the ISO build didn't hit any issues, it just works. But testing
requires some work:
- sysreg2 and rosautotest should be changed to output results in a
format which TeamCity understands. It can be JUnit, Google Test or
TeamCity's own format (see the sketch after this list)
- TeamCity has only three test states: success, failure or crash.
This does not fit our current scheme, which is based on counting succeeded
and failed ok()'s
- TeamCity can't compare test results between arbitrary builds like our
testman does. It can only show "tests which failed in this build, but didn't
fail in the previous one". If we want something more, we can use their API
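To illustrate the first point: TeamCity can pick up "service messages"
printed to stdout by the test runner. Below is a minimal sketch of what
such output code could look like in rosautotest (the function names and
the example test are made up by me; the ##teamcity message format itself
is documented by JetBrains):

    #include <stdio.h>

    /* Emit TeamCity service messages on stdout; the build agent parses
     * them. NOTE: a real implementation must escape ', |, [, ] and
     * newlines with '|' as the service message spec requires. */
    static void TcTestStarted(const char *name)
    {
        printf("##teamcity[testStarted name='%s']\n", name);
    }

    static void TcTestFailed(const char *name, const char *message)
    {
        printf("##teamcity[testFailed name='%s' message='%s']\n",
               name, message);
    }

    static void TcTestFinished(const char *name, unsigned durationMs)
    {
        printf("##teamcity[testFinished name='%s' duration='%u']\n",
               name, durationMs);
    }

    int main(void)
    {
        /* Hypothetical report for a single test run: */
        TcTestStarted("kernel32:CreateFile");
        TcTestFailed("kernel32:CreateFile",
                     "3 of 120 ok() checks failed");
        TcTestFinished("kernel32:CreateFile", 512);
        return 0;
    }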
While looking into how our tests work, I've found that by "test" we
mean a function defined by the START_TEST macro, which does some reporting
to sysreg.
But each START_TEST function typically makes a couple of further test_*
calls, which are not reported in any way. This is a place where the
granularity of reporting can be increased :) (see the simplified example
below)
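For context, here is a simplified, made-up apitest showing that structure
(assuming the usual apitest.h framework; the checks themselves are
invented for illustration):

    #include <apitest.h>

    /* TeamCity would currently see only "CreateFile" as one test, while
     * each test_* function below could be reported as a separate one. */
    static void test_OpenExisting(void)
    {
        HANDLE hFile = CreateFileW(L"C:\\ReactOS\\explorer.exe",
                                   GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, 0, NULL);
        ok(hFile != INVALID_HANDLE_VALUE, "CreateFileW failed\n");
        if (hFile != INVALID_HANDLE_VALUE)
            CloseHandle(hFile);
    }

    static void test_InvalidParams(void)
    {
        HANDLE hFile = CreateFileW(NULL, GENERIC_READ, 0, NULL,
                                   OPEN_EXISTING, 0, NULL);
        ok(hFile == INVALID_HANDLE_VALUE,
           "Expected failure for a NULL path\n");
    }

    START_TEST(CreateFile)
    {
        /* The whole function counts as a single "test" right now. */
        test_OpenExisting();
        test_InvalidParams();
    }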
Attaching some screenshots of how it all looks in general.
https://drive.google.com/open?id=17r4PSFmsi2tiF97HHbYGiMCOoKR7VMUF
Now we have to decide what to do. The options could be:
- Go on with adapting our build & test infrastructure to TeamCity - we
will migrate to it eventually
- Decide we are not going to use TeamCity - then somebody should develop
a custom web interface to consolidate everything (
https://www.reactos.org/wiki/Google_Summer_of_Code_2019_Ideas#Developer_Web…
)
- Just migrate to the new Buildbot (can be done along with the previous
option)
- Don't do anything
Looking forward to discussing this at the meeting.
Cheers,
Victor