1. 18 Jun, 2014 2 commits
  2. 09 May, 2014 1 commit
  3. 05 Feb, 2014 1 commit
  4. 30 Oct, 2013 1 commit
  5. 20 Oct, 2013 1 commit
  6. 23 Nov, 2012 1 commit
  7. 22 Nov, 2012 1 commit
  8. 29 Feb, 2012 1 commit
  9. 22 Nov, 2011 1 commit
    • Allow creating a Mac OS X bundle: · c6b9982c
      Thomas Herault authored
        - add a template info.plist
        - add the CMakeLists magic
        - add a fix in main.cpp: Finder passes an argument (-psn_%d_%d) to any bundled app, which Cocoa is supposed to remove but Qt apparently does not. Remove it by hand until someone figures out how to make Drag & Drop work with Mac OS X.
  10. 26 Sep, 2011 1 commit
  11. 12 Sep, 2011 1 commit
  12. 28 Jul, 2011 1 commit
  13. 26 Jul, 2011 2 commits
  14. 06 Jul, 2011 1 commit
    • Features: · b8bdfff3
      Augustin Degomme authored
      - add a second thread to separate OTF file operations from trace operations: not as much parallelism as with Paje, because file operations are more efficient for OTF (as for Paje parsing, enabled with the flag MT_PARSING)
      - multithreaded loading of serialized files: moves loading from the main thread to multiple threads, as is done for serialization.
      - beginning of the work with MPI: work can now be distributed, and several intervals can be loaded on several machines and displayed there.
      How to use: the flag USE_MPI must be set, and the link options -lboost_mpi -lmpi -lmpi_cxx must be added in src.pro.
      mpicc and mpic++ have to be used instead of gcc and g++. This can be set manually in src/Makefile, but will be reset by the global Makefile. Another solution is to add
      QMAKESPEC = mpicc
      QMAKE_CXX = mpic++
      to the src.pro file.
      To launch on a single machine:
      mpirun -np nprocess vite path/to/file.vite
      To launch on several machines with separate screens, using a machinefile to list them:
      - allow ssh authentication without a password (by key)
      - allow display on each remote machine with "xhost +"
      - have the split folder in a location accessible through the same path on each machine (NFS or local)
      - have vite in the PATH on each machine
      - use mpirun -np nprocess -hostfile machinefile -mca orte_rsh_agent "ssh -X" -x DISPLAY=:0.0 vite -t Interval path/to/file.vite
      This will split the given interval into nprocess parts and send them for display to the nodes listed, each displaying its part on its local display.
  15. 30 Jun, 2011 1 commit
  16. 23 Jun, 2011 1 commit
    • A lot of testing and feedback on this one is needed, in order to improve the way it is done. · fbf0b6c1
      Augustin Degomme authored
      Summary:
      - the trace can be dumped to disk while parsing
      - data can be restored, loading into memory only the part of the trace we want to display (by time and by containers).
      - a light preview version of the whole trace can be displayed, allowing the user to choose an interval and actually load data from it
      How it works:
      - allow serialization of IntervalOfContainers to disk while parsing. Each finished IntervalOfContainer (containing 10000 states or links) can be dumped to disk in a separate file, using the Boost serialization library; its memory is then freed, allowing huge files to be parsed (tested with 8 GB). Each type and container linked in the IntervalOfContainer is assigned a unique id to avoid serializing too much data (file Serializer.hpp). If Boost with gzip is used, the resulting data is compressed. This is handled by several SerializerWriter threads, and the assignment to each thread is done by a singleton object, the SerializerDispatcher. The number of threads used is the number of CPUs found in the machine.
      At the end of parsing, all remaining loaded IntervalOfContainers are dumped. Files are named "unique id of the container"_"IntervalOfContainer index in the container". They are saved in a folder named after the trace file, without extension.
      At the end of dumping, we have a folder containing many files. A file called "name of the trace".vite is created in this folder; it records all containers and types with their unique ids. For each IntervalOfContainer of each container, the beginning and end timings are also saved. This file will be used to correlate data from the multiple IntervalOfContainer files. It also stores the sum of the times of all StateTypes encountered in each IntervalOfContainer.
      - we can now open this .vite file. A ParserSplitted is then used to restore the structure of the trace and all the types.
         - If the -t option is specified with a time interval, data is directly reloaded from the serialized files, loading into memory only the IntervalOfContainers within the time interval.
         - If the -t option was not specified, we load the preview version of the trace, contained in the .vite file.
      The preview version only stores states for the moment. When browsing the preview, the user can select a zone and press Ctrl. This opens a new ViTE window with the same zoom, but with the data loaded from the serialized files.
      How to use:
      needed libraries: libboost_serialization, libboost_thread, and libboost_iostreams. These libraries are in the standard Boost package. On Linux, they include the gzip library and bindings needed for compression. On Windows, this library is not included; it has to be added afterwards and Boost recompiled.
      - cmake: activate the option VITE_ENABLE_SERIALIZATION in order to check for the Boost libraries and add the corresponding files
      - configure: add the flag --use_boost if the libraries are in /usr/lib, --boost_libdir=$dir otherwise.
      - by hand in the src.pro file: add the needed libraries (-lboost_serialization -lboost_thread -lboost_iostreams) and the flags USE_ITC, BOOST_SERIALIZE, and BOOST_GZIP to activate everything
      Todo:
      - make the preview and the -c option work together (-c and -t work together for the moment, as do -c and the preview, but not when loading actual data from disk)
      - add other data to the preview (links, events, and variables)
      - check whether using lots of threads to compress is really useful
      - better balance the load between those threads, without rebinding Qt signals/slots each time
      - tests, tests, and tests.
      - documentation and comments.
  17. 13 Jun, 2011 1 commit
    • feature: · 172244b3
      Augustin Degomme authored
      add IntervalOfContainers, which are described at https://gforge.inria.fr/plugins/wiki/index.php?NewDataStructPage&id=1596&type=g
      A new IntervalOfContainer is built only when a certain number (10000 for the moment) of StateChanges or Links are attached to it. (Initially only StateChanges were taken into account, but Links are created in parent containers without States, so they were all allocated in the same IntervalOfContainer, making it huge.)
      This is not activated by default right now, but can be turned on for testing by setting the flag USE_ITC before compiling.
  18. 11 Jun, 2011 1 commit
    • continuing the flood of your mailboxes · e46d22b3
      Augustin Degomme authored
      - a few more warnings removed for Windows
      - node selection while zooming now almost keeps the zoom (I don't get why the minimum changes a little, and don't know if it's possible to fix this)
      new features:
      - multithreaded Paje parser: this parser uses 3 threads:
         - the parsing thread, which reads the file, produces lines and tokens (lexical analysis), and aggregates them into blocks of 10000 lines
         - the builder thread, which handles these blocks of lines, calls store_event of the ParserEventPaje, transforms the tokens into the appropriate types, and checks the correctness of each line (syntactic analysis), but does not perform the calls to the trace or the structural verifications
         - the trace-building thread, which performs semantic analysis (do the types and containers exist?) and performs the calls to the trace (adding events and states to it).
      - file mapping: in the multithreaded version only, the file is mapped into memory in chunks (100 MB for the moment), which is faster and allows larger files to be handled without using too much memory. The 1 GB limit of the other version is removed; ViTE can now handle much larger files.
      These features need more testing and feedback, and can be activated at compile time by setting the flag MT_PARSING.
      note: the parser still uses a token-number limit; this will be merged with the new version soon.
      - gracefully stopping the parser when the cancel button is hit is not handled yet and causes segfaults
  19. 09 Jun, 2011 1 commit
    • fixes: · 6a4961db
      Augustin Degomme authored
      - fix the build on Windows platforms where getopt is not present: add a version of getopt called xgetopt (license OK), used when building on Windows.
      - fix parsing issues with Paje traces on Windows, caused by the switch to std::getline and the fact that it now removes the end-of-line character (the character just past the end of the string happened to read as a '\n' on Linux, so the bug was not apparent there, but it was still present)
      feature:
      - add a window that lets the user select the containers to display, reorder them, or hide them (by drag and drop and checking/unchecking). The selected display can be saved to an XML file and reloaded for another trace. Found in Preferences/Node Selection.
      - add the flag -c to specify such an XML file to load initially for a trace
      known issues and todo:
      - only works with the OpenGL render: separate the interface from the work on the XML file to allow use with SVC
      - zoom is badly handled, and containers are not yet redrawn with their new sizes
      - no tests done with non-Paje traces; it should work though
      - lacks comments and cleaning
      - put the window in a plugin?
  20. 08 Feb, 2011 1 commit
  21. 19 Aug, 2010 1 commit
  22. 19 Jul, 2010 1 commit
  23. 24 Jun, 2010 2 commits
  24. 21 Apr, 2010 1 commit
  25. 12 Mar, 2010 1 commit
  26. 17 Dec, 2009 3 commits
  27. 16 Dec, 2009 1 commit
  28. 20 Oct, 2009 1 commit
  29. 03 Sep, 2009 1 commit
  30. 21 Aug, 2009 1 commit
  31. 20 Aug, 2009 1 commit
  32. 11 Aug, 2009 1 commit
  33. 05 Aug, 2009 3 commits