Official implementation of DF-SLAM: Dictionary Factors Representation for High-Fidelity Neural Implicit Dense Visual SLAM System
First, make sure you have all dependencies in place. The simplest way to do so is to use Anaconda.
You can create an Anaconda environment called df_slam. On Linux, you need to install libopenexr-dev before creating the environment.
sudo apt-get install libopenexr-dev
conda env create -f environment.yaml
conda activate df_slam

Download the data as below; it is saved into the ./Datasets/Replica folder.
bash scripts/download_replica.sh

Then you can run DF-SLAM:
python -W ignore run.py configs/Replica/room0.yaml

The mesh for evaluation is saved as $OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply, where the unseen and occluded regions are culled using all frames.
Please follow the data download procedure on the ScanNet website, and extract color/depth frames from the .sens file using this code.
Directory structure of ScanNet:
DATAROOT is ./Datasets by default. If a sequence (sceneXXXX_XX) is stored in other places, please change the input_folder path in the config file or in the command line.
DATAROOT
└── scannet
└── scans
└── scene0000_00
└── frames
├── color
│ ├── 0.jpg
│ ├── 1.jpg
│ ├── ...
│ └── ...
├── depth
│ ├── 0.png
│ ├── 1.png
│ ├── ...
│ └── ...
├── intrinsic
└── pose
├── 0.txt
├── 1.txt
├── ...
└── ...
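For example, an input_folder override in the scene config might look like the fragment below. This is an illustrative sketch only: the key names and nesting follow the pattern common to NICE-SLAM-style configs, so check the repository's own ScanNet config files for the exact schema.

```yaml
# Illustrative config override (key names are assumptions; verify against
# configs/ScanNet/scene0000.yaml in the repository).
data:
  input_folder: /mnt/storage/scannet/scans/scene0000_00  # hypothetical custom path
  output: output/scannet/scene0000_00
```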
Once the data is downloaded and set up properly, you can run DF-SLAM:
python -W ignore run.py configs/ScanNet/scene0000.yaml

The final mesh is saved as $OUTPUT_FOLDER/mesh/final_mesh_culled.ply.
Download the data as below; it is saved into the ./Datasets/TUM folder.
bash scripts/download_tum.sh

Then you can run DF-SLAM:
python -W ignore run.py configs/TUM_RGBD/freiburg1_desk.yaml

The final mesh is saved as $OUTPUT_FOLDER/mesh/final_mesh_culled.ply.
To evaluate the average trajectory error, run the command below with the corresponding config file:
# An example for room0 of Replica
python src/tools/eval_ate.py configs/Replica/room0.yaml

To evaluate the reconstruction error, first download the ground-truth Replica meshes and the files that determine the unseen regions.
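For intuition, ATE is typically computed by aligning the estimated camera translations to the ground truth with a closed-form (Umeyama) fit and then taking the RMSE of the residuals. The sketch below is a minimal, self-contained version of that standard recipe; it is not the repository's eval_ate.py, which remains authoritative (e.g. it may align without scale, or align in SE(3)).

```python
import numpy as np

def align_umeyama(gt, est):
    """Closed-form similarity (sim(3)) alignment of est onto gt.

    gt, est: (N, 3) arrays of corresponding camera positions.
    Returns est transformed by the best-fit scale, rotation, and translation.
    """
    mu_gt, mu_est = gt.mean(0), est.mean(0)
    gt_c, est_c = gt - mu_gt, est - mu_est
    # Cross-covariance between centered point sets.
    cov = gt_c.T @ est_c / len(gt)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det(R) = +1).
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / est_c.var(0).sum()
    t = mu_gt - s * R @ mu_est
    return (s * (R @ est.T)).T + t

def ate_rmse(gt, est):
    """Root-mean-square translational error after alignment."""
    aligned = align_umeyama(gt, est)
    return float(np.sqrt(((aligned - gt) ** 2).sum(1).mean()))
```

On noise-free data related by a rigid (or similarity) transform, this recovers the transform exactly and reports an error near zero.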
bash scripts/download_replica_mesh.sh

Then run cull_mesh.py with the following commands to exclude the unseen and occluded regions from evaluation.
# An example for room0 of Replica
# this code should create a culled mesh named 'room0_culled.ply'
GT_MESH=cull_replica_mesh/room0.ply
python src/tools/cull_mesh.py configs/Replica/room0.yaml --input_mesh $GT_MESH

Then run the command below. The 2D metric requires rendering 1000 depth images, which takes some time. Use -2d to enable the 2D metric and -3d to enable the 3D metric.
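Conceptually, the frustum part of this culling keeps only mesh vertices that project inside at least one camera's image. The sketch below illustrates that idea under simplifying assumptions (pinhole OpenCV convention, camera-to-world poses, no depth-based occlusion test); the repository's cull_mesh.py additionally handles occlusion, so treat this as illustration, not the actual implementation.

```python
import numpy as np

def visible_mask(vertices, poses_c2w, K, W, H):
    """Mark vertices that fall inside at least one camera frustum.

    vertices:  (N, 3) world-space mesh vertices.
    poses_c2w: iterable of (4, 4) camera-to-world matrices.
    K:         (3, 3) pinhole intrinsics; W, H: image size in pixels.
    Simplified: frustum check only, no occlusion culling.
    """
    seen = np.zeros(len(vertices), dtype=bool)
    pts_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    for c2w in poses_c2w:
        w2c = np.linalg.inv(np.asarray(c2w, dtype=float))
        cam = (w2c @ pts_h.T).T[:, :3]          # points in camera frame
        in_front = cam[:, 2] > 0                 # +z is the viewing direction
        uv = (K @ cam.T).T
        z = np.where(in_front, uv[:, 2], 1.0)    # avoid divide-by-zero behind camera
        u, v = uv[:, 0] / z, uv[:, 1] / z
        seen |= in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return seen
```

Vertices for which the mask is False in every frame are the "unseen" regions that get removed before evaluation.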
# An example for room0 of Replica
OUTPUT_FOLDER=output/Replica/room0
GT_MESH=cull_replica_mesh/room0_culled.ply
python src/tools/eval_recon.py --rec_mesh $OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply --gt_mesh $GT_MESH -2d -3d
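For reference, 3D reconstruction error in this line of work is commonly reported as accuracy (mean distance from reconstructed points to their nearest ground-truth point), completion (the reverse direction), and a completion ratio under a distance threshold. The brute-force sketch below shows these definitions on point sets; the function name, the 5 cm threshold, and the point-sampling step are assumptions here, and eval_recon.py's exact protocol may differ.

```python
import numpy as np

def recon_metrics(rec_pts, gt_pts, thresh=0.05):
    """Accuracy / completion / completion ratio between two point sets.

    rec_pts: (N, 3) points sampled from the reconstructed mesh.
    gt_pts:  (M, 3) points sampled from the ground-truth mesh.
    thresh:  distance (in the mesh's units) for the completion ratio;
             0.05 (5 cm) is a common choice, assumed here.
    Brute-force O(N*M) nearest neighbors; fine for small samples.
    """
    d = np.linalg.norm(rec_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    accuracy = float(d.min(axis=1).mean())      # rec -> nearest gt
    completion = float(d.min(axis=0).mean())    # gt -> nearest rec
    ratio = float((d.min(axis=0) < thresh).mean())
    return accuracy, completion, ratio
```

Lower accuracy and completion are better; a higher completion ratio is better.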