Mocking timers when unit testing a communication library with timeouts
I have to maintain a rather big library implementing a communication protocol with several layers. Fortunately, there are thousands of unit tests covering many different situations.
My problem is with the unit tests that probe for timeouts within the library. The timeouts are in the range of a few seconds, which is unacceptably long for unit testing.
What we have already done is implement a single central clock service that can be configured to run faster by a certain factor. This makes the tests complete much faster. But we cannot make this factor too large, because then some tests start to fail randomly depending on the performance of the test machine. It is also a nightmare to debug.
I feel that it is wrong to use any kind of independently running clock for testing timeouts. It would be better if time were completely under the control of the unit test.
Does anyone have an idea how to implement this, ideally without having to change too much in the library code, and in a way that is easy to add to the unit tests?
The library code dealing with timeouts is structured more or less like this:
var startTime = TimeService.CurrentTime;
while (TimeService.CurrentTime < startTime + timeout)
{
    // do something
    Thread.Sleep(50); // or so
}
and it would be difficult to change this structure.
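One way to keep that loop structure while putting time fully under test control is to route the sleep call through the same clock service as the time query. A manually advanced test clock then makes "sleeping" advance virtual time instantly, so a multi-second timeout loop runs deterministically in microseconds. A minimal sketch, assuming the interface and class names (`ITimeService`, `SystemTimeService`, `TestTimeService`) are hypothetical and not part of the actual library:

```csharp
using System;
using System.Threading;

// Hypothetical abstraction: both the current time and sleeping
// go through the same service, so tests can control both.
public interface ITimeService
{
    DateTime CurrentTime { get; }
    void Sleep(TimeSpan duration);
}

// Production implementation: real clock, real sleep.
public sealed class SystemTimeService : ITimeService
{
    public DateTime CurrentTime => DateTime.UtcNow;
    public void Sleep(TimeSpan duration) => Thread.Sleep(duration);
}

// Test implementation: "sleeping" advances virtual time and does
// not block, so a timeout loop is deterministic and near-instant
// regardless of the test machine's load.
public sealed class TestTimeService : ITimeService
{
    public DateTime CurrentTime { get; private set; } = new DateTime(2018, 11, 11, 0, 0, 0, DateTimeKind.Utc);
    public void Sleep(TimeSpan duration) => CurrentTime += duration;
}
```

With this, the only change in the library loop would be replacing `Thread.Sleep(50)` with `TimeService.Sleep(TimeSpan.FromMilliseconds(50))`; the loop then iterates exactly `timeout / 50ms` times in a test, with no wall-clock dependence.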
.net unit-testing time
NodaTime covers durations in some of its test cases, and it looks like some of them express the expected value as an Int64 to avoid converting the DateTime back.
– lloyd
Nov 11 at 10:09
As for test cases failing randomly depending on the machine's performance, one way to cover this is to re-run the test case multiple times, or to somehow distinguish failed test cases from errors in a test case.
– lloyd
Nov 11 at 10:14
@lloyd Yes, re-running could help reduce false negatives, but I'm not a big fan of repeating tests until they finally succeed (it might also hide real problems). It also doesn't really let me reduce the run time of the tests any further.
– Klaus Gütter
Nov 11 at 10:58
Microsoft took a different approach and eliminated their flaky tests. Salesforce has the concept of resource pools. You could separate out these tests, execute them sequentially (assuming you currently run them in parallel), and ensure they release all resources at the end of execution.
– lloyd
Nov 11 at 13:58
@lloyd Thank you for the interesting links! But I still hope the tests can be made more deterministic by replacing the free-running clock with something under the control of the test script.
– Klaus Gütter
Nov 11 at 14:18
edited Nov 11 at 9:25
asked Nov 11 at 7:31
Klaus Gütter
1,176612