- Oct 12, 2020
AUDEBERT Nicolas authored
- Sep 21, 2020
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
- Jun 26, 2020
Ayush Pandey authored
Co-authored-by: Ayush Pandey <ayush@eventregistry.org>
- May 02, 2020
Thor Tomasarson authored
- Apr 07, 2020
Sayantan Das authored
* Added Salinas
* joblib is now independent of sklearn.externals
* Closes #7
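For reference, a minimal sketch of the import change; the dump/load call below is hypothetical, just to show the standalone package:

    # joblib is now imported as a standalone package rather than through
    # sklearn.externals, whose vendored copy was deprecated and later removed.
    import joblib

    # Hypothetical usage: persist a Python object to disk and reload it.
    joblib.dump({"weights": [0.1, 0.2]}, "model.pkl")
    restored = joblib.load("model.pkl")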
- Mar 05, 2020
- May 07, 2019
AUDEBERT Nicolas authored
Nicolas authored
- Mar 26, 2019
Nicolas authored
- Sep 25, 2018
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
Mixing inference and training in the same script adds a lot of complexity. Inference capabilities have been moved to inference.py; the main script retains training, validation, and testing on a predefined dataset.
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
Many papers use a sampling method where the authors extract N samples per class, with N fixed. We implement this sampling method for our datasets. The --train_size argument now follows the sklearn convention (an int gives an absolute number of samples, a float a relative proportion).
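A minimal sketch of such a split, assuming a 2D ground-truth map in which label 0 marks undefined pixels; the helper name sample_gt is hypothetical and the repository's actual implementation may differ:

    import numpy as np

    def sample_gt(gt, train_size, rng=np.random.default_rng(0)):
        # sklearn train_size convention: an int selects that many samples
        # per class, a float selects that fraction of each class.
        train = np.zeros_like(gt)
        for c in np.unique(gt):
            if c == 0:  # label 0 marks undefined pixels (assumption)
                continue
            indices = np.flatnonzero(gt == c)
            if isinstance(train_size, int):
                n = train_size
            else:
                n = int(train_size * len(indices))
            chosen = rng.choice(indices, size=min(n, len(indices)), replace=False)
            train.flat[chosen] = c
        test = np.where(train == 0, gt, 0)  # remaining labelled pixels
        return train, test

    gt = np.random.randint(0, 5, size=(50, 50))       # toy ground-truth map
    train_gt, test_gt = sample_gt(gt, train_size=10)  # 10 samples per class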
AUDEBERT Nicolas authored
Several implementation details of Hu's 1D CNN are specified in the paper and were wrong in the previous implementation:
* We now use tanh activations (instead of ReLU)
* We now use the SGD optimizer with lr = 0.01 (instead of Adam)
* We now compute the kernel size using Hu's equation from the paper
This also fixes a squeezing bug triggered when the current batch contains only one sample.
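A minimal sketch of the corrected settings, assuming the paper's kernel-size rule is ceil(n_bands / 9) with 20 filters; the class name and layer sizes are illustrative, not the repository's exact code:

    import math
    import torch
    import torch.nn as nn

    class Hu1DCNN(nn.Module):
        def __init__(self, n_bands, n_classes):
            super(Hu1DCNN, self).__init__()
            kernel_size = math.ceil(n_bands / 9)  # kernel size from band count
            self.conv = nn.Conv1d(1, 20, kernel_size)
            self.pool = nn.MaxPool1d(5)
            pooled = (n_bands - kernel_size + 1) // 5  # length after conv + pool
            self.fc1 = nn.Linear(20 * pooled, 100)
            self.fc2 = nn.Linear(100, n_classes)

        def forward(self, x):
            # x: (batch, 1, n_bands). Squeeze any extra singleton dims by
            # explicit index only; a bare x.squeeze() also collapses the batch
            # dimension when the batch holds a single sample (the old bug).
            x = self.pool(torch.tanh(self.conv(x)))  # tanh, not ReLU
            x = torch.tanh(self.fc1(x.flatten(1)))
            return self.fc2(x)

    model = Hu1DCNN(n_bands=103, n_classes=9)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD, not Adam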
AUDEBERT Nicolas authored
The verbose forward pass was useful for inspecting the network, but was tedious to implement. The summary() function now does this automatically. Removed the verbose-related printing from the model definitions.
- Sep 24, 2018
AUDEBERT Nicolas authored
Users asked for a way to see a summary of the network before training. We use the summary() function from the torchsummary package to provide users with a list of layers, the number of parameters, and an estimate of the network's size in memory.
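A minimal sketch of the call, using a toy layer stack (illustrative only, not one of the repository's networks):

    import torch.nn as nn
    from torchsummary import summary

    model = nn.Sequential(
        nn.Conv2d(103, 20, kernel_size=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(20, 9),
    )
    # Prints one row per layer with output shape and parameter count,
    # plus totals and an estimate of the model's size in memory.
    summary(model, input_size=(103, 5, 5), device="cpu")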
AUDEBERT Nicolas authored
PyTorch 0.4 introduced the new device API to simplify tensor storage management. We removed all calls to the old .cuda() function and replaced them with the new .to() storage management.
* All functions that previously took a "cuda=" keyword argument now take a "device=" argument that expects a torch.device object (or a 'cpu' or 'cuda' string if the object is not available).
* The --cuda CLI argument now expects an integer: -1 selects CPU computing (the default if omitted), otherwise it is the ordinal of the GPU on which to perform the computation.
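A minimal sketch of the mapping from the --cuda integer to a torch.device; the helper name get_device is hypothetical:

    import torch

    def get_device(ordinal=-1):
        # -1 selects the CPU (the default); any other value is a GPU ordinal.
        if ordinal < 0:
            return torch.device("cpu")
        return torch.device("cuda:{}".format(ordinal))

    device = get_device(-1)
    model = torch.nn.Linear(10, 2).to(device)  # .to() replaces the old .cuda()
    batch = torch.randn(4, 10, device=device)
    logits = model(batch)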
- May 31, 2018
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
- May 30, 2018
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
- May 29, 2018
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
AUDEBERT Nicolas authored
Custom datasets should not be versioned and should be added externally.
AUDEBERT Nicolas authored
Python 2.7 compatibility.
AUDEBERT Nicolas authored
This allows us to relax the matplotlib dependency and use the interactive features of Visdom.
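A minimal sketch of the interactive plotting this enables, assuming a Visdom server is running locally (python -m visdom.server); the data below is synthetic:

    import numpy as np
    import visdom

    viz = visdom.Visdom()  # connects to the local Visdom server

    # Interactive line plot, e.g. a training-loss curve (synthetic here).
    losses = np.exp(-np.linspace(0, 3, 50))
    viz.line(Y=losses, X=np.arange(len(losses)), opts={"title": "Training loss"})

    # Display an image (C x H x W), e.g. a false-color composite of a scene.
    viz.image(np.random.rand(3, 64, 64), opts={"caption": "false-color view"})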