Issue #63 · Closed
Created Mar 02, 2018 by Millian Poquet (Owner) · 4 of 5 checklist items completed

Tests: use robin rather than exec*

Robin is now used to run experiments.

We should think about using it to run Batsim tests too. This would require us to:

  • call robin, either by:
    • generating many robin input files, or
    • calling robin's CLI from scripts, with parameters
  • write the check scripts in dedicated files (they are currently exec* postcommands)
  • write wrapper scripts that call robin and then check the result (see the sketch after this list)
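
A minimal sketch of such a wrapper, assuming robin takes the path of a description file as its CLI argument and exits with a nonzero code when the simulation fails (the file names and CLI details here are assumptions, not robin's confirmed interface):

```python
#!/usr/bin/env python3
"""Hypothetical wrapper (wrapper.py): run one instance via robin, then check it."""
import subprocess
import sys

def run_instance(robin_input_file: str, check_script: str) -> int:
    # Run the simulation instance through robin.
    # Assumption: robin takes a description file as argument and
    # exits with a nonzero code if the simulation fails.
    robin = subprocess.run(['robin', robin_input_file])
    if robin.returncode != 0:
        print('robin reported a failed simulation', file=sys.stderr)
        return robin.returncode

    # Run the check script (currently an exec* postcommand).
    return subprocess.run([check_script]).returncode

if __name__ == '__main__':
    sys.exit(run_instance(sys.argv[1], sys.argv[2]))
```

CTest counts any nonzero exit code as a test failure, so a wrapper like this plugs directly into add_test.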

Flatten tests?

This could be the occasion to flatten Batsim tests.
Currently, a test runs (many) simulation instances. For each instance, it makes sure the simulation ends correctly [and, depending on the test, also checks the simulation output].
In a flattened architecture, each simulation instance [+ check] would be its own test.

Flattened tests can still remain modular, since in CMake a test is just a command to launch.
For example, we could create a script for each current test, then create myriads of tests by calling such scripts multiple times with different parameters, as sketched below.
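
An illustrative sketch of that idea (the parameter names and file layout are hypothetical, not an existing Batsim script): one parametrized script runs a single instance [+ check], and each CMake add_test entry calls it with a different parameter set:

```python
#!/usr/bin/env python3
"""Hypothetical per-instance test script: one invocation = one flattened test."""
import argparse
import subprocess
import sys

def main() -> int:
    # Illustrative parameters; real tests would expose whatever varies
    # between the current simulation instances.
    parser = argparse.ArgumentParser()
    parser.add_argument('--platform', required=True)
    parser.add_argument('--workload', required=True)
    parser.add_argument('--algorithm', required=True)
    args = parser.parse_args()

    # Map the parameters to a robin description and a check script,
    # then delegate to the wrapper sketched above (paths are made up).
    robin_input = f'tests/{args.algorithm}-{args.workload}-{args.platform}.yaml'
    check_script = f'tests/check-{args.algorithm}.py'
    return subprocess.run(['./wrapper.py', robin_input, check_script]).returncode

if __name__ == '__main__':
    sys.exit(main())
```

Each test then reduces to one command line, which is exactly what CMake's add_test expects, so registering myriads of them is a matter of looping over the parameter combinations.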

Flattening pros:

  • Very easy to determine which simulation instance failed.
    Currently we have to find it in the execN log (e.g., read_csv('instances_info.csv') %>% filter(status=='skipped')).
  • Easier to debug, as the failing instance can be re-executed in isolation.

Flattening cons:

  • Each instance would require a unique name, so some caution is needed when generating them (see the sketch below).
  • This will generate some CMake noise (but we could use CMake loops to avoid most of it).
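
A small sketch of the kind of caution meant here (the naming scheme is purely illustrative): derive each test name deterministically from the instance parameters and fail fast on collisions:

```python
"""Hypothetical helper: build unique, deterministic test names."""
from typing import Dict, Iterable, List

def make_test_names(instances: Iterable[Dict[str, str]]) -> List[str]:
    names = []
    seen = set()
    for inst in instances:
        # e.g. {'algorithm': 'easy_bf', 'workload': 'small'} -> 'easy_bf-small'
        name = '-'.join(inst[key] for key in sorted(inst))
        if name in seen:
            raise ValueError(f'duplicate test name: {name}')
        seen.add(name)
        names.append(name)
    return names
```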
Edited Apr 19, 2018 by Millian Poquet