Commit 94533592 authored by Martin Genet

Setting up Docker image and Jupyter book build and deploy on GitHub

parent 80d85535
Pipeline #1162822 passed
......@@ -6,7 +6,8 @@
### ###
################################################################################
-FROM registry.gitlab.inria.fr/mgenet/dolfin_warp-tutorials:latest
+# FROM registry.gitlab.inria.fr/mgenet/dolfin_warp-tutorials:latest
+FROM ghcr.io/mgenet/dolfin_warp-tutorials:latest
# Copy repo into the image, cf. https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html. MG20230531: OMG this copies from the "build context", cf. https://stackoverflow.com/questions/73156067/where-does-the-copy-command-in-docker-copy-from; here it seems to be the repo itself.
ARG NB_USER=jovyan
......
on:
  - push
jobs:
  build_and_deploy_jupyter_book:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      pages: write
    environment:
      name: github-pages
    steps:
      - name: Checkout repository
        uses: actions/checkout@main
      - name: Build Jupyter Book
        run: |
          pip install -U jupyter-book
          jupyter-book clean .
          jupyter-book build .
      - name: Upload artifact
        uses: actions/upload-artifact@main
        with:
          name: github-pages
          path: "_build/html"
      - name: Deploy Jupyter Book to GitHub Pages
        uses: actions/deploy-pages@main
......
on:
  - push
jobs:
  build_and_push_docker_image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@main
      - name: Prepare build
        run: |
          rm -rf ${{github.workspace}}/.binder
          cp ${{github.workspace}}/.repo2docker/* ${{github.workspace}}/.
      - name: Build and push docker image
        uses: jupyterhub/repo2docker-action@master
        with:
          DOCKER_USERNAME: ${{github.actor}}
          DOCKER_PASSWORD: ${{secrets.GITHUB_TOKEN}}
          DOCKER_REGISTRY: ghcr.io
......
on:
  - push
  - delete
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@main
        with:
          fetch-depth: 0
      - name: Sync repo
        uses: wangchucheng/git-repo-sync@master
        with:
          target-url: https://gitlab.inria.fr/mgenet/dolfin_warp-tutorials
          target-username: mgenet
          target-token: ${{secrets.GITLAB_TOKEN}}
......@@ -15,7 +15,6 @@ build_docker:
  tags:
    - ci.inria.fr
    - large
  # image: registry.gitlab.inria.fr/inria-ci/docker/ubuntu:20.04
  image: ubuntu:20.04
  script:
    - apt update; DEBIAN_FRONTEND=noninteractive apt install -y ca-certificates curl git gnupg lsb-release mercurial python3 python3-pip tzdata
......
# Welcome to the dolfin_warp tutorials!
-Main library can be found at https://gitlab.inria.fr/mgenet/dolfin_warp
+Main library can be found at https://github.com/mgenet/dolfin_warp
-Interactive tutorials can be found at https://mgenet.gitlabpages.inria.fr/dolfin_warp-tutorials/index.html.
+Interactive tutorials can be found at https://mgenet.github.io/dolfin_warp-tutorials.
# Welcome to the dolfin_warp tutorials!
-Main library can be found at [https://gitlab.inria.fr/mgenet/dolfin_warp](https://gitlab.inria.fr/mgenet/dolfin_warp).
+Main library can be found at [https://github.com/mgenet/dolfin_warp](https://github.com/mgenet/dolfin_warp).
Tutorials can be browsed statically, but also interactively: to start a session and run the code, click on the rocket icon at the top of a tutorial page, then click on Binder.
%% Cell type:markdown id: tags:
# Generate & track synthetic images
%% Cell type:markdown id: tags:
We start with some imports…
%% Cell type:code id: tags:
``` python
import dolfin # https://fenicsproject.org
import dolfin_warp as dwarp # https://gitlab.inria.fr/mgenet/dolfin_warp
import dolfin_warp as dwarp # https://github.com/mgenet/dolfin_warp
import lib_viewer
```
%% Cell type:markdown id: tags:
## Synthetic data generation
%% Cell type:markdown id: tags:
### Image generation
%% Cell type:markdown id: tags:
Let us introduce a small tool to generate simple synthetic images.
Here we define all the image parameters.
%% Cell type:code id: tags:
``` python
# Image parameters
n_dim = 2 # dimension (2 or 3)
L = [1.]*n_dim # spatial extent
n_voxels = [100]*n_dim # spatial discretization
T = 1. # temporal extent
n_frames = 21 # temporal discretization
# Structure (i.e., object) parameters
# (For now we will consider a simple square object.
# (For the translation case it is convenient to start with an uncentered square,
# (whereas for the other cases it is convenient if the square is centered.
structure_type = "box"; structure_Xmin = [0.1, 0.3]; structure_Xmax = [0.5, 0.7]
# structure_type = "box"; structure_Xmin = [0.3, 0.3]; structure_Xmax = [0.7, 0.7]
# Texture parameters
# ("no" means the image will be the characteristic function of the object in its current position,
# (which is the simplest model for standard (i.e., untagged) MRI images,
# (whereas "tagging" means the signal over the object will be sqrt(abs(sin(pi*x/s))*abs(sin(pi*y/s))),
# (which is the simplest model for tagged MRI images.
texture_type = "tagging"; texture_s = 0.1
# texture_type = "no"
# Noise parameters
noise_level = 0
# noise_level = 0.1
# noise_level = 0.2
# noise_level = 0.3
# Transformation parameters
# (For now we consider basic transformations of the object.
deformation_type = "translation"; deformation_Dx = 0.4; deformation_Dy = 0.
# deformation_type = "rotation"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Rz = 45.
# deformation_type = "compression"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Exx = -0.20
# deformation_type = "shear"; deformation_Cx = 0.5; deformation_Cy = 0.5; deformation_Fxy = +0.20
# Evolution parameters
# ("linear" means the object will transform linearly in time,
# (reaching its prescribed deformation on the last frame.
evolution_type = "linear"
# Case
case = deformation_type
case += "-"+texture_type
if (noise_level > 0):
    case += "-"+str(noise_level)
```
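%% Cell type:markdown id: tags:
The "tagging" texture and "normal" noise models above can be sketched with plain NumPy (a minimal illustration, not dolfin_warp's implementation; the function name `tagging_intensity` and the sampling grid are assumptions):
%% Cell type:code id: tags:
``` python
import numpy as np

def tagging_intensity(x, y, s):
    """Tagging texture model: sqrt(abs(sin(pi*x/s)) * abs(sin(pi*y/s)))."""
    return np.sqrt(np.abs(np.sin(np.pi*x/s)) * np.abs(np.sin(np.pi*y/s)))

# Sample the texture on a 100x100 voxel grid over [0,1]^2, as in the parameters above
s = 0.1
x = np.linspace(0., 1., 100)
X, Y = np.meshgrid(x, x, indexing="ij")
I = tagging_intensity(X, Y, s)

# "normal" noise simply adds zero-mean Gaussian noise with the prescribed standard deviation
noise_level = 0.1
rng = np.random.default_rng(0)
I_noisy = I + rng.normal(0., noise_level, I.shape)

# The case name is assembled exactly as in the cell above
case = "translation" + "-" + "tagging" + ("-"+str(noise_level) if (noise_level > 0) else "")
```
Intensities vanish on the tag lines (x or y a multiple of s) and peak at 1 midway between them, which is what produces the characteristic tagging grid pattern.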
%% Cell type:markdown id: tags:
And then actually generate the images.
%% Cell type:code id: tags:
``` python
# Image properties
images = {
    "n_dim":n_dim,
    "L":L,
    "n_voxels":n_voxels,
    "T":T,
    "n_frames":n_frames,
    "data_type":"float",
    "folder":"E1",
    "basename":"images-"+case}
# Structure (i.e., object) properties
if (structure_type == "box"):
    structure = {"type":"box", "Xmin":structure_Xmin, "Xmax":structure_Xmax}
# Texture properties
if (texture_type == "no"):
    texture = {"type":"no"}
elif (texture_type == "tagging"):
    texture = {"type":"tagging", "s":texture_s}
# Noise properties
if (noise_level == 0):
    noise = {"type":"no"}
else:
    noise = {"type":"normal", "stdev":noise_level}
# Deformation properties
if (deformation_type == "translation"):
    deformation = {"type":"translation", "Dx":deformation_Dx, "Dy":deformation_Dy}
elif (deformation_type == "rotation"):
    deformation = {"type":"rotation", "Cx":deformation_Cx, "Cy":deformation_Cy, "Rz":deformation_Rz}
elif (deformation_type == "compression"):
    deformation = {"type":"homogeneous", "X0":deformation_Cx, "Y0":deformation_Cy, "Exx":deformation_Exx}
elif (deformation_type == "shear"):
    deformation = {"type":"homogeneous", "X0":deformation_Cx, "Y0":deformation_Cy, "Fxy":deformation_Fxy}
# Evolution properties
if (evolution_type == "linear"):
    evolution = {"type":"linear"}
# Generate images
dwarp.generate_images(
    images=images,
    structure=structure,
    texture=texture,
    noise=noise,
    deformation=deformation,
    evolution=evolution,
    verbose=0)
```
%% Cell type:markdown id: tags:
We visualize them using [itkwidgets](https://github.com/InsightSoftwareConsortium/itkwidgets).
%% Cell type:code id: tags:
``` python
viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti")
viewer.view()
```
%% Cell type:markdown id: tags:
### Ground truth generation
%% Cell type:markdown id: tags:
Later on we will track the object throughout the images using finite elements.
To assess the quality of the tracking, it is convenient to have a ground truth, in the form of a finite element solution over a mesh.
We now create the mesh, and generate this ground truth.
%% Cell type:markdown id: tags:
#### Mesh
%% Cell type:code id: tags:
``` python
# Mesh parameter
n_elems = 1
```
%% Cell type:code id: tags:
``` python
# Generate mesh
mesh = dolfin.RectangleMesh(
    dolfin.Point(structure_Xmin),
    dolfin.Point(structure_Xmax),
    n_elems,
    n_elems,
    "crossed")
```
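%% Cell type:markdown id: tags:
As a sanity check on the mesh size: with the "crossed" diagonal pattern, each of the n_elems × n_elems squares is split into 4 triangles around a center vertex, so the cell and vertex counts can be predicted without dolfin (a small sketch under that assumption; the helper name is made up):
%% Cell type:code id: tags:
``` python
def crossed_mesh_counts(n_elems):
    """Expected entity counts for an n x n "crossed" RectangleMesh."""
    n_cells = 4 * n_elems**2                     # 4 triangles per square
    n_vertices = (n_elems + 1)**2 + n_elems**2   # grid nodes + one center per square
    return n_cells, n_vertices
```
With n_elems = 1 as above, this gives 4 cells and 5 vertices, which can be checked against mesh.num_cells() and mesh.num_vertices().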
%% Cell type:markdown id: tags:
#### Ground truth
%% Cell type:code id: tags:
``` python
# Generate ground truth
dwarp.compute_warped_mesh(
    working_folder="E1",
    working_basename="ground_truth"+"-"+deformation_type,
    images=images,
    structure=structure,
    deformation=deformation,
    evolution=evolution,
    mesh=mesh,
    verbose=0)
```
%% Cell type:markdown id: tags:
We can now visualize the images and meshes.
(To better see the mesh on top of the images, select the "Wireframe" mode for the mesh).
%% Cell type:code id: tags:
``` python
viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti",
    meshes="E1/ground_truth-"+deformation_type+"_*.vtu")
viewer.view()
```
%% Cell type:markdown id: tags:
## Motion tracking
%% Cell type:markdown id: tags:
### Motion tracking
%% Cell type:markdown id: tags:
We now introduce a motion tracking tool.
Here we define all the tracking parameters.
%% Cell type:code id: tags:
``` python
# Regularization strength
# (It must be ≥ 0 and < 1.
regul_level = 0.
# Regularization type
# ("continuous-elastic" means the image term is penalized with the strain energy of the displacement field.
# ("continuous-equilibrated" means the image term is penalized with the equilibrium gap of the displacement field.
# ("discrete-equilibrated" means the image term is penalized with the discrete linear equilibrium gap of the displacement field.
# regul_type = "continuous-linear-elastic"
# regul_type = "continuous-linear-equilibrated"
regul_type = "continuous-elastic"
# regul_type = "continuous-equilibrated"
# regul_type = "discrete-simple-elastic"
# regul_type = "discrete-simple-equilibrated"
# regul_type = "discrete-linear-equilibrated"
# regul_type = "discrete-linear-equilibrated-tractions-normal"
# regul_type = "discrete-linear-equilibrated-tractions-tangential"
# regul_type = "discrete-linear-equilibrated-tractions-normal-tangential"
# regul_type = "discrete-equilibrated"
# regul_type = "discrete-equilibrated-tractions-normal"
# regul_type = "discrete-equilibrated-tractions-tangential"
# regul_type = "discrete-equilibrated-tractions-normal-tangential"
# Regularization model
# ("hooke" means the linear isotropic Hooke law (infinitesimal strain/linearized elasticity).
# ("ciarletgeymonatneohookean" means the hyperelastic potential made of Ciarlet-Geymonat & neo-Hookean terms (finite strain/nonlinear elasticity).
if any([_ in regul_type for _ in ["linear", "simple"]]):
    regul_model = "hooke"
else:
    regul_model = "ciarletgeymonatneohookean"
```
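%% Cell type:markdown id: tags:
The selection rule above routes every regul_type containing "linear" or "simple" to the small-strain "hooke" model, and all others to the finite-strain "ciarletgeymonatneohookean" model; restated as a standalone function (the function name is made up):
%% Cell type:code id: tags:
``` python
def pick_regul_model(regul_type):
    """Small-strain regularization types get the Hooke model; the rest get the hyperelastic one."""
    if any([key in regul_type for key in ["linear", "simple"]]):
        return "hooke"
    return "ciarletgeymonatneohookean"
```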
%% Cell type:markdown id: tags:
And then actually run the tracking.
%% Cell type:code id: tags:
``` python
# Perform tracking
dwarp.warp(
    working_folder="E1",
    working_basename="tracking-"+case,
    #
    images_folder="E1",
    images_basename="images-"+case,
    #
    mesh=mesh,
    #
    regul_type=regul_type,
    regul_model=regul_model,
    regul_level=regul_level,
    #
    relax_type="constant",
    tol_dU=1e-2,
    #
    write_VTU_files=True,
    write_VTU_files_with_preserved_connectivity=True)
```
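%% Cell type:markdown id: tags:
Two of the solver parameters deserve a note: relax_type="constant" applies a fixed relaxation factor to each Newton update of the displacement, and tol_dU stops the iterations once the update norm drops below the threshold. A toy 1D damped-Newton loop illustrating the idea (this is not dolfin_warp's solver; the function, the residual, and the relaxation value are assumptions):
%% Cell type:code id: tags:
``` python
def damped_newton(f, df, U0, relax=1., tol_dU=1e-2, max_iter=100):
    """Newton iteration with a constant relaxation factor on the update dU."""
    U = U0
    for _ in range(max_iter):
        dU = -f(U) / df(U)       # full Newton step
        U += relax * dU          # constant relaxation
        if (abs(dU) < tol_dU):   # convergence test on the update norm
            break
    return U

# Solve U^2 = 2 as a stand-in for one nonlinear tracking solve
U = damped_newton(lambda U: U*U - 2., lambda U: 2.*U, U0=1., relax=0.8)
```
A relaxation below 1 slows convergence on this smooth toy problem, but in image registration it helps keep the updates small enough that the image term stays well behaved.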
%% Cell type:markdown id: tags:
We can now visualize the images and tracked meshes.
(To better see the mesh on top of the images, select the "Wireframe" mode for the mesh).
%% Cell type:code id: tags:
``` python
viewer = lib_viewer.Viewer(
    images="E1/images-"+case+"_*.vti",
    meshes="E1/tracking-"+case+"_*.vtu")
viewer.view()
```
%% Cell type:markdown id: tags:
### Tracking error
%% Cell type:markdown id: tags:
We can compute a normalized tracking error by comparing the tracked displacement $\underline{U}$ and the ground truth displacement $\underline{\bar U}$:
$$
e := \dfrac{\sqrt{\frac{1}{T}\int_{t=0}^{T} \frac{1}{\left|\Omega_0\right|}\int_{\Omega_0} \left\|\underline{U} - \underline{\bar U}\right\|^2 d\Omega_0~dt}}{\sqrt{\frac{1}{T}\int_{t=0}^{T} \frac{1}{\left|\Omega_0\right|}\int_{\Omega_0} \left\|\underline{\bar U}\right\|^2 d\Omega_0~dt}}
$$
(Watch out, the meshes should match as the error is actually computed at the discrete level!)
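%% Cell type:markdown id: tags:
At the discrete level, with matching meshes and time frames, the two time/space means reduce to averages over frames and nodes; a minimal NumPy sketch (the function name and array shapes are assumptions):
%% Cell type:code id: tags:
``` python
import numpy as np

def normalized_tracking_error(U, U_bar):
    """RMS of ||U - U_bar|| over frames and nodes, normalized by the RMS of ||U_bar||.
    Both arrays have shape (n_frames, n_nodes, n_dim)."""
    num = np.sqrt(np.mean(np.sum((U - U_bar)**2, axis=-1)))
    den = np.sqrt(np.mean(np.sum(U_bar**2, axis=-1)))
    return num / den
```
A perfect tracking gives e = 0, and a uniformly over-estimated field U = 1.1 U_bar gives e = 0.1.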
%% Cell type:code id: tags:
``` python
dwarp.compute_displacement_error(
    working_folder="E1",
    working_basename="tracking-"+case,
    ref_folder="E1",
    ref_basename="ground_truth-"+deformation_type,
    verbose=0)
```
%% Cell type:markdown id: tags:
## References
[[Claire, Hild & Roux (2004). A finite element formulation to identify damage fields: The equilibrium gap method. International Journal for Numerical Methods in Engineering, 61(2), 189–208.]](https://doi.org/10.1002/nme.1057)
[[Veress, Gullberg & Weiss (2005). Measurement of Strain in the Left Ventricle during Diastole with cine-MRI and Deformable Image Registration. Journal of Biomechanical Engineering, 127(7), 1195.]](https://doi.org/10.1115/1.2073677)
[[Réthoré, Roux & Hild (2009). An extended and integrated digital image correlation technique applied to the analysis of fractured samples: The equilibrium gap method as a mechanical filter. European Journal of Computational Mechanics.]](https://doi.org/10.3166/ejcm.18.285-306)
[[Leclerc, Périé, Roux & Hild (2010). Voxel-Scale Digital Volume Correlation. Experimental Mechanics, 51(4), 479–490.]](https://doi.org/10.1007/s11340-010-9407-6)
[[Genet, Stoeck, von Deuster, Lee & Kozerke (2018). Equilibrated Warping: Finite Element Image Registration with Finite Strain Equilibrium Gap Regularization. Medical Image Analysis, 50, 1–22.]](https://doi.org/10.1016/j.media.2018.07.007)
[[Lee & Genet (2019). Validation of Equilibrated Warping—Image Registration with Mechanical Regularization—On 3D Ultrasound Images. In Coudière, Ozenne, Vigmond & Zemzemi (Eds.), Functional Imaging and Modeling of the Heart (FIMH) (Vol. 11504, pp. 334–341). Springer International Publishing.]](https://doi.org/10.1007/978-3-030-21949-9_36)
[[Berberoğlu, Stoeck, Moireau, Kozerke & Genet (2019). Validation of Finite Element Image Registration‐based Cardiac Strain Estimation from Magnetic Resonance Images. PAMM, 19(1).]](https://doi.org/10.1002/pamm.201900418)
[[Genet (2023). Finite strain formulation of the discrete equilibrium gap principle: application to mechanically consistent regularization for large motion tracking. Comptes Rendus Mécanique, 351, 429-458.]](https://doi.org/10.5802/crmeca.228)
......