Build Jobs: Parts of Test Suite Fail Regularly #2439
Comments
Thank you for your summary of these problems! Is it maybe possible to disable the jobs only at the places where they are failing?
Thank you for your input! Separating the problematic jobs might make the rebuild cycles shorter. But I think it is clear that we do not want any manual rebuilds at all. So we have the following options:

What do you think?
- Hardly possible as long as we utilize
- This feels dirty to me.
- Seems to be the option that causes the least discomfort, although having manual regression tests is not nice either.
As discussed in the meeting: we should disable the tests.
Alternative also discussed in the meeting: Using

Running

My proposal would be to call

But if the problem really is high server load, that won't help much. Instead we could try

IMO the best option would still be to disable the tests and create a small build job that only installs the dependencies needed by these plugins/libraries, only compiles what is necessary, and only runs the problematic tests. That way we could probably get the runtime down to a few minutes, in which case manual restarting would be acceptable, I think. For comparison, our FreeBSD jobs currently take about 10 min (7 min build, 2 min test, 1 min other) to run ~200 tests.

PS. Not sure about our setup, but restarting a Jenkins pipeline from a certain stage should be possible.
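To make the "small build job" idea concrete, a minimal sketch of what such a job could run is below. The `PLUGINS` value and the test-name regex are assumptions derived from the failure table in this issue, not the actual Jenkins job definition:

```shell
# Hypothetical minimal job: build only the flaky plugins and run only
# their tests. Paths and the PLUGINS list are assumptions, not the
# real job configuration.
mkdir -p build && cd build
cmake -DPLUGINS="crypto;fcrypt;gpgme;zeromqsend;dbus" ..
make -j"$(nproc)"
# ctest -R runs only the tests whose names match the regular expression
ctest --output-on-failure -R 'testmod_(crypto|fcrypt|gpgme|zeromqsend|dbus)'
```

With only a handful of plugins compiled and a test run restricted by `-R`, the job should finish far faster than the full 10-minute FreeBSD jobs mentioned above.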
Thank you for looking into it!
@ingwinlu did a lot of work in this direction. Our servers have the highest throughput under high load, i.e. we would slow down our tests with such options.

Making the test cases modular is very difficult to achieve and maintain. @ingwinlu put a lot of work into it. I think we cannot make this effort again only for a few unreliable tests.

That would be great. But I do not see the restart button in our GUI. Do we need another plugin or a newer version? @ingwinlu tried to add "jenkins build * please" for all pipeline steps; unfortunately, it did not work.
This update should get rid of most of the temporary test failures reported in issue [ElektraInitiative#2439](https://issues.libelektra.org/2439). This commit closes ElektraInitiative#2439.
It seems like we still have failures (dbus, see #2532).
What about excluding the dbus test cases for the Mac builds?
Yes, we do.
Were you able to reproduce it locally? We still do not know why this problem sporadically occurs. If you have any input, it would be great. Maybe we can simply exclude the tests from the problematic build jobs? Or do the dbus* test cases fail on every build job where they run?
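Excluding tests per job is possible with CTest's exclude-regex option; the invocation below is only a sketch of how such a job step could look, not the project's actual Jenkins configuration:

```shell
# Run the whole suite but skip the sporadically failing dbus tests.
# -E (--exclude-regex) skips every test whose name matches the regex;
# '^testmod_dbus' covers both testmod_dbus and testmod_dbusrecv.
ctest --output-on-failure -E '^testmod_dbus'
```

This keeps the exclusion local to the affected build jobs, so all other jobs would still run the dbus tests.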
Unfortunately not. I'm on Ubuntu.
I just restarted the build job to see if it happens again.
Please re-assign me if necessary.
I now implemented automatic retry of ctest in #3224. If you still experience temporary failures of the test suites, please reopen the issue. (We can increase the number of tries.) For other failures of Jenkins/Docker we need to find other solutions, but first we finally need to do the migration. So please continue to restart the job in these cases.
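The general idea behind such a retry can be sketched in a few lines of shell. The `retry` and `flaky` names below are hypothetical illustrations, not the actual code from #3224; note also that recent CTest (CMake 3.17+) supports retries natively via `ctest --repeat until-pass:<n>`:

```shell
#!/bin/sh
# Hypothetical retry wrapper (not the actual implementation in #3224):
# run a command up to $1 times and stop at the first success.
retry() {
  max="$1"; shift
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      echo "succeeded on attempt $i"
      return 0
    fi
    i=$((i + 1))
  done
  echo "failed after $max attempts"
  return 1
}

# Demo: a "flaky" command that fails twice, then succeeds,
# using a temp file to count invocations across calls.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky() {
  n=$(($(cat "$attempts_file") + 1))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]
}

retry 5 flaky   # prints: succeeded on attempt 3
```

In a CI setting, the wrapped command would be the `ctest` invocation itself, so a sporadic failure only costs one extra test run instead of a full rebuild.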
Description
I opened this issue to keep track of all the temporary test failures in the build jobs. The main reasons for the build failures are

. In a recent PR I had to restart the Jenkins build job 5 times before everything worked. In the PR after that, I restarted the Jenkins build job thrice, as far as I can remember. Anyway, the failure rate is much too high in my opinion.
Failures
| Branch/PR | Failing tests (failure count) | Build job |
|-----------|-------------------------------|-----------|
| master | testmod_gpgme (1) | debian-stable-full |
| master | testmod_gpgme (1), testmod_zeromqsend (1) | debian-stable-full-ini |
| master | testmod_crypto_botan (1), testmod_fcrypt (1), testmod_gpgme (2), testmod_zeromqsend (1) | debian-stable-full-mmap |
| master | testmod_crypto_botan (1), testmod_fcrypt (2) | debian-unstable-full |
| master | testmod_crypto_botan (2), testmod_crypto_openssl (3), testmod_fcrypt (1) | debian-unstable-full-clang |
| PR #2442 | testmod_crypto_openssl (1), testmod_gpgme (1) | debian-stable-full-ini |
| PR #2442 | testmod_crypto_openssl (1), testmod_crypto_botan (1), testmod_fcrypt (1), testmod_gpgme (3) | debian-stable-full-mmap |
| PR #2442 | testmod_crypto_openssl (1), testmod_fcrypt (1) | debian-unstable-full |
| PR #2442 | testmod_crypto_openssl (1), testmod_crypto_botan (1), testmod_fcrypt (1) | debian-unstable-full-clang |
| PR #2442 | testmod_dbus (1), testmod_dbusrecv (1) | 🍎 MMap |
| PR #2443 | testmod_crypto_botan (1), testmod_fcrypt (1) | debian-unstable-full |
| PR #2443 | testmod_crypto_openssl (1), testmod_crypto_botan (1) | debian-unstable-full-clang |
| PR #2443 | testmod_dbus (1), testmod_dbusrecv (1) | 🍎 MMap |
| PR #2445 | testmod_crypto_openssl (1), testmod_crypto_botan (1), testmod_fcrypt (1) | debian-stable-full-ini |
| PR #2445 | testmod_crypto_openssl (2), testmod_crypto_botan (2), testmod_fcrypt (2), testmod_gpgme (1) | debian-stable-full-mmap |
| PR #2445 | testmod_crypto_openssl (2), testmod_fcrypt (2) | debian-unstable-full |
| PR #2445 | testmod_dbus (1), testmod_dbusrecv (1) | 🍏 GCC |