1. 25 Mar, 2019 1 commit
    • New gitlab-ci with 3 stages: build, test and cleaning · 6c9556d8
      sebastien letort authored
      I assume the runner will have docker and docker-compose installed.
      /!\ gitlab-runner must be a sudoer, able to run sudo without a password. /!\
      
      The build stage only calls ./bootstrap.
      The tests are run with pylint3 (allowed to fail) and with 'test' (unit and functional tests).
      Finally, cleaning is always performed. This is required because otherwise bootstrap would try to erase previous data without being a sudoer, and that would fail.
      
      Also, to reduce the number of files created as "root", we do not generate Python cache files for the controller.
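The three-stage pipeline described above could be sketched as the following .gitlab-ci.yml — the job names, commands, and cleanup step are assumptions for illustration, not the actual file:

```yaml
stages:
  - build
  - test
  - clean

build:
  stage: build
  script:
    - sudo ./bootstrap        # requires passwordless sudo for gitlab-runner

pylint:
  stage: test
  script:
    - pylint3 allgo
  allow_failure: true         # pylint3 "can fail"

tests:
  stage: test
  script:
    - ./test                  # unit and functional tests

clean:
  stage: clean
  when: always                # cleaning is always done
  script:
    - sudo docker-compose down -v
```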
  2. 26 Sep, 2018 1 commit
    • Allow importing a webapp from a legacy allgo instance · 51f51d9c
      BAIRE Anthony authored
      
      This adds two views:
      
      - WebappImport for importing the webapp (but without the versions).
        The import is allowed if the requesting user has the same email
        as the owner of the imported app. The webapp is created with
        imported=True, which enables the WebappVersionImport view
      
      - WebappVersionImport for requesting the import of a webapp version.
        This only creates the WebappVersion entry with state=IMPORT
        (the actual import is performed by the controller)
      
      A version may be imported multiple times. In that case, the newly
      imported version overwrites the local version with the same number.
      
      This feature requires:
      - that the rails server implements !138
      - that the docker daemon hosting the sandboxes is configured with
        credentials for pulling from the legacy registry
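The permission rule above (import allowed only when the requesting user has the same email as the owner of the legacy app) could be sketched as a small helper — the function name and signature are hypothetical, not the actual view code:

```python
# Hypothetical sketch of the WebappImport permission check: the import
# is allowed only if the requesting user's email matches the owner of
# the app on the legacy allgo instance.

def may_import_webapp(requesting_user_email: str, legacy_owner_email: str) -> bool:
    """Return True if the requesting user owns the legacy app (same email)."""
    return requesting_user_email == legacy_owner_email
```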
  3. 18 Sep, 2018 1 commit
    • run job containers as an ordinary user · 8e55e780
      BAIRE Anthony authored
      The "UID:GID" pair is configurable via the JOB_USER environment variable.
      
      This config is the same for all jobs. In production it has to be set
      to the squashed uid/gid configured in the NFS exports so that job
      files can be read and written.
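Reading the "UID:GID" pair from JOB_USER could look like the following sketch — the variable name comes from the commit, but the parsing details and the fallback default are assumptions:

```python
import os

def job_user(environ=os.environ, default=(65534, 65534)):
    """Return the (uid, gid) pair from JOB_USER, e.g. "1000:1000".

    The default (nobody/nogroup) is an assumption for illustration.
    """
    value = environ.get("JOB_USER")
    if not value:
        return default
    uid, _, gid = value.partition(":")
    return int(uid), int(gid)
```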
  4. 27 Jun, 2018 4 commits
    • Use the redis db to trigger controller actions · 01dd48e6
      BAIRE Anthony authored
      This commit removes the old notification channel (socket listening
      on port 4567), and uses the redis channel 'notify:controller' instead.
      
      The django job creation views are updated accordingly.
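A minimal sketch of the new notification path, assuming a JSON payload (the channel name 'notify:controller' is from the commit; the message format and function name are assumptions):

```python
import json

CHANNEL = "notify:controller"

def job_created_message(job_id):
    """Build the (channel, payload) pair for a job-creation notification."""
    return CHANNEL, json.dumps({"event": "job_created", "job_id": job_id})

# With a real redis client this would be sent as:
#   redis.Redis().publish(*job_created_message(42))
```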
    • Stream job logs and job state updates to the user · 1bb4acf4
      BAIRE Anthony authored
      This commit makes several changes.
      
      In the controller:
      
      - duplicate the logs produced by the jobs. Initially they were only
        stored into allgo.log; now they are also forwarded to the container
        output (using the 'tee' command) so that the controller can read
        them
      
      - add a log_task that reads the logs from docker and feeds them into
        the redis db key "log:job:<ID>" (this is implemented with aiohttp
        in order to be fully asynchronous)
      
      - store the job state in a new redis key "state:job:<ID>"
      
      - send a notification to the redis pubsub 'notify:aio' channel when
        the job state has changed or when new logs are available
      
      In the allgo.aio frontend:
      
      - implement the /aio/jobs/<ID>/events endpoint, which streams all
        job events & logs to the client (using json-formatted messages)
      
      In django:
      
      - refactor the JobDetail view and template to update the page
        dynamically for job updates (state/logs)
          - allgo.log is read only when the job is already terminated.
            Otherwise the page uses the /aio/jobs/<ID>/events channel
            to stream the logs
          - the state icon is patched on the page when the state changes,
            except for the DONE state which triggers a full page reload
            (because there are other parts to be updated)
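The redis keys and the JSON event messages described above could be sketched as follows — the key patterns "log:job:<ID>" and "state:job:<ID>" come from the commit, while the message fields are assumptions:

```python
import json

def log_key(job_id):
    """Redis key holding the accumulated logs of a job."""
    return f"log:job:{job_id}"

def state_key(job_id):
    """Redis key holding the current state of a job."""
    return f"state:job:{job_id}"

def event_message(job_id, state=None, logs=None):
    """JSON message streamed on /aio/jobs/<ID>/events (format assumed)."""
    msg = {"job_id": job_id}
    if state is not None:
        msg["state"] = state
    if logs is not None:
        msg["logs"] = logs
    return json.dumps(msg)
```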
    • update the location of the job files · c5f93183
      BAIRE Anthony authored
      the job files are now located in the 'django' container; the full
      path is "{DATASTORE}/{JOB_ID}"
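The new layout reduces to a one-line path rule — a sketch, with the DATASTORE value left as a parameter since the commit does not state it:

```python
import os

def job_dir(datastore, job_id):
    """Directory holding the files of one job: {DATASTORE}/{JOB_ID}."""
    return os.path.join(datastore, str(job_id))
```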
    • add a redis client in the controller · 553fee62
      BAIRE Anthony authored
  5. 09 Apr, 2018 2 commits
  6. 20 Nov, 2017 1 commit
  7. 14 Nov, 2017 1 commit
    • refactor the management of swarm/sandbox resources · 0e301e74
      BAIRE Anthony authored
      - add SwarmAbstractionClient: a class that extends docker.Client and
        hides the API differences between the docker remote API and the
        swarm API. Thus a single docker engine can be used like a swarm
      
      - add SharedSwarmClient: a class that extends SwarmAbstractionClient,
        monitors the swarm health and its resources (cpu/mem), and manages
        resource allocation.
        - resources are partitioned in groups (to allow reserving resources
          for higher priority jobs)
        - two SharedSwarmClient instances can work together over TCP in a
          master/slave configuration (to allow the production and
          qualification platforms to use the same swarm without any
          interference)
      
      - the controller is modified to:
        - use SharedSwarmClient to:
          - wait for the end of a job (in place of DockerWatcher)
          - manage resource reservation (LONG_APPS vs. BIGMEM_APPS vs normal
            apps) and monitor swarm health (fix #124)
          - NOTE: resources of the swarm and sandbox are now managed
            separately (2 instances of SharedSwarmClient), whereas it was
            global before (this was suboptimal)
        - rely on SwarmAbstractionClient to compute the cpu quotas
        - store the container_id of jobs into the DB (fix #128), this is a
          prerequisite to permit renaming apps in the future
        - store the class of the job (normal vs. long app) in the container
          name (for the resource management with SharedSwarmClient)
        - read the configuration from a yaml file (/vol/ro/config.yml) for:
          - cpu/mem quotas
          - swarm resources allocation policy
          - master/slave configuration
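The idea of partitioning resources into groups so that higher-priority jobs keep a reserved share could be sketched like this — the class and its API are illustrative assumptions, not the actual SharedSwarmClient code:

```python
class ResourceGroups:
    """Partition a fixed cpu capacity into named groups (e.g. normal
    apps vs. LONG_APPS vs. BIGMEM_APPS) so one group cannot starve
    another. Hypothetical sketch."""

    def __init__(self, groups):
        """groups maps a group name to its reserved cpu capacity."""
        self.capacity = dict(groups)
        self.used = {name: 0 for name in groups}

    def allocate(self, group, cpus):
        """Reserve cpus in 'group'; fail rather than borrow from others."""
        if self.used[group] + cpus > self.capacity[group]:
            return False
        self.used[group] += cpus
        return True

    def release(self, group, cpus):
        """Return cpus to 'group' when a job ends."""
        self.used[group] = max(0, self.used[group] - cpus)
```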
  8. 20 Apr, 2017 1 commit
  9. 21 Mar, 2017 1 commit
  10. 15 Mar, 2017 1 commit
    • replace pipecmd with a real ssh connection · e789f843
      BAIRE Anthony authored
      - sshd server installed in the toolbox
      - ssh keys & config stored in ssh:/vol/cache and mounted as
        /.sandbox inside the sandbox
      - toolbox mounted as /.toolbox inside the sandbox
      - ssh agent & X11 forwarding are now working
      - the toolbox commands are available by default in every sandbox
        (vim, less, nc, scp, ...)
      - sandboxes now attached to a separate network (named
        'allgo_sandboxes' by default)
      
      fix #88
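A hypothetical docker invocation matching the mounts and network described above — the volume names and image name are assumptions:

```shell
# ssh keys & config from ssh:/vol/cache, toolbox commands, and the
# dedicated sandbox network (names assumed for illustration)
docker run \
    --network allgo_sandboxes \
    -v ssh_cache:/.sandbox:ro \
    -v toolbox:/.toolbox:ro \
    sandbox-image
```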
  11. 28 Feb, 2017 1 commit
  12. 09 Feb, 2017 1 commit
  13. 31 Jan, 2017 1 commit
  14. 10 Jan, 2017 2 commits
  15. 12 Dec, 2016 1 commit
  16. 24 Nov, 2016 1 commit
  17. 15 Nov, 2016 1 commit