Reactivate basic testcases for melissa p2p
Try to activate at least some important testcases for melissa p2p and debug the outcome:
- Take as an example the one in examples/p2p/simulation.cxx; script.py in the same folder shows how a study is started.
- Test case where the study runs different configurations until the end (1-3 runners, 2-5 procs per runner). This is already done in test/test-p2p-basic.py.
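Such a sweep over runner/proc configurations can be sketched as below. This is only an illustration: `run_study` is a hypothetical placeholder for whatever entry point test/test-p2p-basic.py actually uses to launch a study.

```python
# Sketch of a parameter sweep over study configurations. run_study() is a
# hypothetical stand-in for the real study-launching helper.
import itertools


def enumerate_configurations(runner_counts=(1, 2, 3), procs_per_runner=(2, 3, 4, 5)):
    """Yield every (n_runners, n_procs) combination to test."""
    yield from itertools.product(runner_counts, procs_per_runner)


def run_study(n_runners, n_procs):
    # Placeholder: the real test would start launcher + server + runners here.
    print(f"would run study with {n_runners} runner(s), {n_procs} procs each")


if __name__ == "__main__":
    for n_runners, n_procs in enumerate_configurations():
        run_study(n_runners, n_procs)
```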
- REM: so far there is always only one server core in melissa p2p. This reduces the number of possible configurations compared to melissa-da.
- Randomly selecting maybe 3 cases (as in test-different-parallelism of melissa-da) could be a good idea.
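Picking a random subset of the configuration grid could look like the following sketch (the grid bounds mirror the 1-3 runners / 2-5 procs ranges above; a seed keeps CI runs reproducible):

```python
# Sketch: sample a few random (n_runners, procs_per_runner) cases from the
# full grid instead of running all of them, similar in spirit to
# test-different-parallelism in melissa-da.
import itertools
import random


def sample_configurations(k=3, seed=None):
    """Pick k random (n_runners, procs_per_runner) cases from the grid."""
    grid = list(itertools.product(range(1, 4), range(2, 6)))  # 1-3 runners, 2-5 procs
    rng = random.Random(seed)
    return rng.sample(grid, k)
```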
- Test case where a runner crashes and the launcher restarts it.
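The crash/restart pattern such a test needs to verify can be sketched with a dummy subprocess standing in for a real runner. The real test would instead kill the (renamed) runner executable and then query the launcher's state:

```python
# Minimal sketch of the crash-and-restart pattern, using a dummy
# subprocess in place of a real melissa runner. In the actual test the
# launcher, not this script, would do the restarting.
import subprocess
import sys


def start_dummy_runner():
    """Stand-in for launching a runner process."""
    return subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])


def crash_and_restart(proc):
    """Kill the 'runner', confirm it died abnormally, start a replacement."""
    proc.kill()                  # simulate a runner crash
    proc.wait()
    assert proc.returncode != 0  # did not exit cleanly
    return start_dummy_runner()  # the launcher's job in the real test
```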
- Test case where the server crashes. In theory this is implemented and was tested manually, but an automated testcase is still necessary. Probably the melissa-da tests only need small modifications (change the simulation's and server's executable names so they can be crashed selectively, and create a validation data set).
- To create a validation dataset, examples/p2p/simulation.cxx should do something more interesting, and calculate_weight.py as well! It is probably hard to have THE ONE validation dataset since particle filters are heuristic, so maybe only check that the output contains the correct number of assimilation cycles and so on. How do we output results? On the runner side, as in simulation1 from the melissa-da examples? This should at least work for testing.
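A weak validation check along those lines could be sketched as follows. The output file name and line format here are pure assumptions for illustration; the real runner-side output format would dictate the parsing.

```python
# Sketch: instead of comparing against THE one reference dataset (particle
# filters are heuristic), only count the assimilation cycles recorded in
# the runner-side output. The "cycle ..." line format is an assumption.
def count_assimilation_cycles(path):
    """Count lines that mark a completed assimilation cycle."""
    with open(path) as f:
        return sum(1 for line in f if line.startswith("cycle"))


def check_study_output(path, expected_cycles):
    """Fail the test if the study did not complete the expected cycles."""
    n = count_assimilation_cycles(path)
    assert n == expected_cycles, f"expected {expected_cycles} cycles, got {n}"
```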
- Maybe start off from the p2p-ci branch in melissa-da.
- Also run all testcases once with a random sleep in the simulation and once without ;)
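One way to make the sleep toggleable from the test harness without two simulation builds is an environment variable, sketched below. The variable name `MELISSA_RANDOM_SLEEP` is an assumption, not an existing melissa option, and the real sleep would live in the C++ simulation loop.

```python
# Sketch: inject an optional random delay, controlled by a hypothetical
# MELISSA_RANDOM_SLEEP environment variable, so the same testcases can run
# once with jitter and once without.
import os
import random
import time


def maybe_random_sleep(max_seconds=0.5):
    """Call inside the simulation loop; sleeps only when jitter is enabled."""
    if os.environ.get("MELISSA_RANDOM_SLEEP") == "1":
        time.sleep(random.uniform(0.0, max_seconds))
```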