A_structured_latent_space
This is the code for the paper A Structured Latent Space for Human Body Motion Generation, published at 3DV 2022.
If you use this code in your projects, please cite the paper using the following information:
@inproceedings{marsot2022structured,
  title={A structured latent space for human body motion generation},
  author={Marsot, Mathieu and Wuhrer, Stefanie and Franco, Jean-S{\'e}bastien and Durocher, Stephane},
  booktitle={2022 International Conference on 3D Vision (3DV)},
  pages={557--566},
  year={2022},
  organization={IEEE}
}
Dependencies and downloads
For the data, download the AMASS dataset following the procedure given at https://amass.is.tue.mpg.de/. The required subfolders are: ACCAD, CMU, EKUT, Eyes_Japan_Dataset, HumanEva, KIT, MPI_HDM05, MPI_Limits, Transitions_Mocap, MPI_Mosh, SFU, TotalCapture.
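As a sanity check before running the pipeline, you can verify that all required subfolders are in place. This helper is not part of the repository; it is a minimal sketch, assuming AMASS_FOLDER_PATH points to your download location:

```python
# Minimal sanity check (not part of the repository): verify that the required
# AMASS subfolders listed above are present.
from pathlib import Path

AMASS_FOLDER_PATH = Path("path/to/AMASS")  # adjust to your download location
REQUIRED = ["ACCAD", "CMU", "EKUT", "Eyes_Japan_Dataset", "HumanEva", "KIT",
            "MPI_HDM05", "MPI_Limits", "Transitions_Mocap", "MPI_Mosh", "SFU",
            "TotalCapture"]

missing = [name for name in REQUIRED if not (AMASS_FOLDER_PATH / name).is_dir()]
if missing:
    print("Missing AMASS subfolders:", ", ".join(missing))
else:
    print("All required AMASS subfolders found.")
```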
Please also install the SMPLH+DMPL body model; guidelines are given at https://github.com/nghorbani/amass#body-models. Put the smplh and dmpl folders in body_models/.
The requirements are:
- python 3.7
- pytorch 1.7.1 (see https://pytorch.org/)
- human-body-prior (PyPI version)
- psutil
- pytorch3d
- matplotlib
- dtw
As of this writing, there is an issue in the human_body_prior package: you will need to remove the dtype argument from the lbs call in human_body_prior.body_model (l. 229), so that the call reads:
verts, joints = lbs(betas=shape_components, pose=full_pose, v_template=self.v_template,
shapedirs=shapedirs, posedirs=self.posedirs,
J_regressor=self.J_regressor, parents=self.kintree_table[0].long(),
lbs_weights=self.weights)
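To check that both the body model installation and the patch above work, you can run a quick smoke test. This is a sketch rather than repository code: the file paths assume the neutral SMPLH+DMPL models are placed as described above, and the keyword argument names (bm_fname, dmpl_fname) follow recent human_body_prior releases; older releases use bm_path and path_dmpl instead.

```python
# Smoke test (not part of the repository): load the SMPLH+DMPL body model and
# run one forward pass. Paths and keyword names are assumptions; see lead-in.
from human_body_prior.body_model.body_model import BodyModel

bm = BodyModel(bm_fname="body_models/smplh/neutral/model.npz",
               dmpl_fname="body_models/dmpl/neutral/model.npz",
               num_betas=16, num_dmpls=8)
out = bm()          # forward pass with default (zero) pose and shape
print(out.v.shape)  # expected: torch.Size([1, 6890, 3]) for SMPL topology
```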
The code was developed for Ubuntu 18.04.
Data preparation
Paper data
To obtain exactly the same preprocessing as in the paper, use the crop.py script and specify the location of the AMASS folder. Use the --mp argument to enable multiprocessing:
python crop.py AMASS_FOLDER_PATH (--mp)
If you are interested in adding new data from AMASS, follow the instructions below; otherwise, go directly to preprocessing and normalization.
Cropping on new data
To crop new AMASS data, you should specify the AMASS folder in a cropping configuration (configuration/cropping/crop_config.json). The cropping requires reference sequences that you need to manually extract from AMASS motions (walking or running cycles, for instance); an illustrative configuration is sketched below.
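The authoritative schema is the crop_config.json shipped with the repository; the field names in this sketch are hypothetical and only illustrate the two pieces of information mentioned above:

```json
{
  "amass_folder": "path/to/AMASS",
  "reference_sequences": [
    "path/to/walk_cycle.npz",
    "path/to/run_cycle.npz"
  ]
}
```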
Then run:
python segment_amass.py -o YOUR_OUTPUT_FOLDER --config YOUR_CROPPING_CONFIG (--mp)
Preprocessing and normalization
To preprocess the data, run:
python preprocess.py YOUR_TRAINING_CONFIG
YOUR_TRAINING_CONFIG gives the configuration of the model; example configurations can be found in configurations/training/. This script preprocesses all sequences by setting a similar initial alignment and converting rotations to the 6D rotation representation (sketched below).
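For reference, the 6D rotation representation (Zhou et al., CVPR 2019) encodes a rotation matrix by its first two columns and recovers the matrix by Gram-Schmidt orthonormalization, which avoids the discontinuities of axis-angle and quaternion encodings. The sketch below is illustrative, not the repository's own conversion code; recent pytorch3d releases also ship equivalent helpers (matrix_to_rotation_6d, rotation_6d_to_matrix) in pytorch3d.transforms.

```python
# Illustrative sketch of the 6D rotation representation used after
# preprocessing (not the repository's own code).
import torch
import torch.nn.functional as F

def matrix_to_6d(R: torch.Tensor) -> torch.Tensor:
    # R: (..., 3, 3) -> (..., 6): concatenate the first two columns.
    return torch.cat([R[..., :, 0], R[..., :, 1]], dim=-1)

def sixd_to_matrix(d6: torch.Tensor) -> torch.Tensor:
    # d6: (..., 6) -> (..., 3, 3) via Gram-Schmidt orthonormalization.
    a1, a2 = d6[..., :3], d6[..., 3:]
    b1 = F.normalize(a1, dim=-1)                                  # first column
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)                              # third column
    return torch.stack([b1, b2, b3], dim=-1)
```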
Then, to compute normalization statistics, run:
python write_normalization.py YOUR_TRAINING_CONFIG
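As an illustration of what normalization statistics typically are in this setting (the script's actual output format may differ), one computes a per-dimension mean and standard deviation over all preprocessed training frames and normalizes inputs with them:

```python
# Illustrative only: per-dimension normalization statistics over preprocessed
# sequences (the actual format written by write_normalization.py may differ).
import numpy as np

def normalization_stats(sequences):
    # sequences: iterable of (frames, features) arrays from preprocessing.
    stacked = np.concatenate([s.reshape(-1, s.shape[-1]) for s in sequences])
    mean = stacked.mean(axis=0)
    std = stacked.std(axis=0) + 1e-8  # epsilon avoids division by zero
    return mean, std

# The model then consumes (x - mean) / std instead of raw features.
```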
Training the model
To train, you need to specify a training configuration and run:
python train.py YOUR_TRAINING_CONFIG
To train the model used for all motion prior evaluations, use:
python train.py configurations/configuration_pretrain_64.json
python train.py configurations/configuration_finetune_64.json
To train the model used for motion completion, use:
python train.py configurations/configuration_pretrain_256.json
python train.py configurations/configuration_finetune_256.json