# EgoNet

This repo provides a basic annotation tool built on a pre-trained EgoNet paired with a Mask R-CNN. We assume only a single car per image should be annotated: the one corresponding to the biggest 2D box predicted by the Mask R-CNN.
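
The biggest-box selection can be sketched as follows (a hypothetical helper, assuming boxes come as `(x1, y1, x2, y2)` tuples; the actual Mask R-CNN output structure may differ):

```
def biggest_box(boxes):
    # Each box is (x1, y1, x2, y2); keep the detection with the largest area.
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
```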

The EgoNet detections are converted to match the [EG3D](https://github.com/NVlabs/eg3d) cam2world format.
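
As a rough illustration of what a cam2world matrix is (a sketch only; the axis conventions and exact EG3D label layout are not reproduced here), a world2cam extrinsic `[R|t]` can be inverted into a 4x4 cam2world matrix, where `R` and `t` stand in for hypothetical pose estimates:

```
import numpy as np

def to_cam2world(R, t):
    # Invert a rigid world2cam extrinsic [R|t]:
    # the cam2world rotation is R^T and the translation is -R^T @ t.
    M = np.eye(4)
    M[:3, :3] = R.T
    M[:3, 3] = -R.T @ t
    return M
```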

## How to use
1. Download the pre-trained model from [here](https://drive.google.com/file/d/1JsVzw7HMfchxOXoXgvWG1I_bPRD1ierE/view) and uncompress the files into a folder named `checkpoint`.
2. Run the annotation script:

```
python annotate_dataset.py --data=<data-folder-path> --dest=<json-file-path>
```

## Visualization
We visualize and compare the estimated cam2world matrices for a set of [car images](/camera_calib/data/).

![cars](docs/images/samples.png)

---

- All camera systems looking at one 3D box (cam2world). Includes the two known cam2world matrices used by [EG3D](https://github.com/NVlabs/eg3d) to train on the AFHQ dataset (the two cameras in the middle).

```
python cam2world.py
```

![cam2world](docs/images/cam2world.png)

---

- One camera system looking at all the 3D boxes (world2cam)

```
python world2cam.py
```

![world2cam](docs/images/world2cam.png)
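
The two views are related by matrix inversion: for a rigid transform, `world2cam = inv(cam2world)`. A quick numerical check (a standalone sketch, not tied to the scripts above):

```
import numpy as np

# Build a random rigid cam2world transform and verify that its
# inverse (world2cam) composes back to the identity.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal rotation
cam2world = np.eye(4)
cam2world[:3, :3] = Q
cam2world[:3, 3] = rng.normal(size=3)

world2cam = np.linalg.inv(cam2world)
assert np.allclose(world2cam @ cam2world, np.eye(4))
```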

## References
Please refer to the [official implementation](https://github.com/Nicholasli1995/EgoNet) for more information.