Roof Classification, Segmentation, and Damage Completion using 3D Point Clouds

This repository contains the project for the TUM course Advanced Deep Learning for Computer Vision (ADL4CV): https://dvl.in.tum.de/teaching/adl4cv-ws18/
We design a deep learning framework that directly consumes unordered point sets representing the roof of a building. A point cloud is represented as a set of 3D points {Pi | i = 1, …, n}, where each point Pi is a vector of its (x, y, z) coordinates. We perform the following tasks: roof classification, roof segmentation, and completion of damaged roofs.
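As a small illustration of this representation (not code from this repository), a roof point cloud can be stored as an n×3 array. Because the set is unordered, any feature computed from it should be permutation-invariant; a per-coordinate max pool, as used in PointNet [2], is one such symmetric function:

```python
import numpy as np

# A toy "roof" point cloud: n points, each an (x, y, z) coordinate.
points = np.array([
    [0.0, 0.0, 3.0],
    [1.0, 0.0, 2.5],
    [0.0, 1.0, 2.5],
    [1.0, 1.0, 2.0],
])  # shape (n, 3)

# A symmetric aggregate (per-coordinate max pool) is unchanged
# by any permutation of the points.
feature = points.max(axis=0)
shuffled = points[np.random.permutation(len(points))]
assert np.array_equal(shuffled.max(axis=0), feature)
```

This order-independence is what lets the network consume raw, unordered point sets without sorting or voxelizing them first.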
Clone the repository:

git clone https://github.com/sarthakTUM/roofn3d.git
cd roofn3d

Install the requirements: pip install -r requirements.txt. It is recommended to perform this step in a separate virtual environment; if using Conda, create one with conda create --name roofn3d.

Classification/segmentation demo: cd cls_seg_mt, then run python demo.py. Different examples can be seen by changing the --idx parameter, for example python demo.py --idx=15. The --idx parameter can be at most 23.

Completion demo: cd completion from the repository root, then run python demo.py. Different examples can be seen by changing the --input_path parameter, for example python demo.py --input_path=demo_data/saddleback_roof.pcd or python demo.py --input_path=demo_data/twosidedhip_roof.pcd.

The models are cloned along with the repository. If there are any difficulties in cloning the models, please download them into the following directories:
Classification/segmentation/multi-task models: the roofn3d/cls_seg_mt/models directory.
Completion models: the completion/log directory.

Non-damaged data: download from https://drive.google.com/open?id=1hYBrNl2nficT_rFJkFUlXQYjR-2QvNQF
Damaged data: download from https://drive.google.com/open?id=1gNqYKxKP3CT_Fd8D_L6OWk545tAR4Oxl

Unzip the downloaded data into a folder.
For classification, go to the cls_seg_mt directory and run python train_classification.py --input_path=path_to_data_from_step2 --outf=models/cls.
Set --input_path to the path of the data obtained in step 2; the --outf argument is the output directory for the trained models. For example: python train_classification.py --input_path=data/roofn3d_data_multitask_all --outf=models/cls.
For segmentation, run python train_segmentation.py --input_path=path_to_data_from_step2 --outf=models/seg.
Set --input_path to the path of the data obtained in step 2; the --outf argument is the output directory for the trained models. For example: python train_segmentation.py --input_path=data/roofn3d_data_multitask_all --outf=models/seg.
For multi-task learning, run python train_multitask.py --input_path=path_to_data_from_step2 --outf=models/mt.
Set --input_path to the path of the data obtained in step 2; the --outf argument is the output directory for the trained models. For example: python train_multitask.py --input_path=data/roofn3d_data_multitask_all --outf=models/mt.
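All three training scripts share the same command-line pattern. As a hypothetical sketch of how such an --input_path/--outf interface can be parsed (illustrative names and defaults, not the repository's actual code):

```python
import argparse

def parse_train_args(argv=None):
    """Parse --input_path/--outf style arguments like the training scripts above."""
    parser = argparse.ArgumentParser(description="Train a roof model (illustrative sketch)")
    parser.add_argument("--input_path", required=True,
                        help="path to the unzipped data from step 2")
    parser.add_argument("--outf", default="models/out",
                        help="output directory for trained models")
    return parser.parse_args(argv)

# Usage, mirroring the example command above:
args = parse_train_args(["--input_path=data/roofn3d_data_multitask_all",
                         "--outf=models/cls"])
```

With this interface, omitting the required --input_path makes argparse exit with a usage message, which matches the "you must change --input_path" instruction above.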
NOTE: The first run might take some time to load all the point clouds into memory and save them for faster access on subsequent runs. It is recommended to have at least 10 GB of RAM available for data loading.
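The caching behaviour described in this note can be sketched as follows: parse each point cloud once, then keep a binary copy (here NumPy's .npy format, an assumption about the mechanism, not the repository's actual code) so later runs skip the slow text parsing:

```python
import os
import numpy as np

def load_cloud_cached(txt_path, cache_path):
    """Load a point cloud, using a cached binary copy when available."""
    if os.path.exists(cache_path):
        return np.load(cache_path)   # fast path on subsequent runs
    cloud = np.loadtxt(txt_path)     # slow first-time parse
    np.save(cache_path, cloud)       # cache for the next run
    return cloud

# Usage: the first call parses and caches; the second call hits the cache.
np.savetxt("roof.txt", np.random.rand(100, 3))
first = load_cloud_cached("roof.txt", "roof.npy")
second = load_cloud_cached("roof.txt", "roof.npy")
assert np.allclose(first, second)
```

The trade-off is exactly the one the note warns about: all clouds end up resident in memory, so the RAM requirement scales with the dataset size.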
To test classification, go to the cls_seg_mt directory and run python test_cls.py --input_path=path_to_data --model=model_to_test.pth. For example: python test_cls.py --input_path=data/roofn3d_data_multitask_all --model=models/classification_complete.pth.
To test segmentation, go to the cls_seg_mt directory and run python test_seg.py --input_path=path_to_data --model=model_to_test.pth. For example: python test_seg.py --input_path=data/roofn3d_data_multitask_all --model=models/segmentation_complete.pth.
To test multi-task learning, go to the cls_seg_mt directory and run python test_multitask.py --input_path=path_to_data --model=model_to_test.pth. For example: python test_multitask.py --input_path=data/roofn3d_data_multitask_all --model=models/multitask_complete.pth.
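The test scripts report metrics such as classification accuracy. As a generic illustration (not the repository's code), accuracy over predicted roof-class labels is simply the fraction of predictions that match the ground truth:

```python
import numpy as np

def classification_accuracy(pred, target):
    """Fraction of roofs whose predicted class matches the label."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    return float((pred == target).mean())

# Toy example with 3 roof classes (e.g. saddleback, two-sided hip, pyramid):
preds = [0, 1, 2, 1, 0]
labels = [0, 1, 1, 1, 0]
print(classification_accuracy(preds, labels))  # 0.8
```

Segmentation accuracy can be computed the same way, with one label per point instead of one per roof.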
The RoofN3D training data (Wichmann et al., 2018) was provided by the chair Methods of Geoinformation Science of Technische Universität Berlin and is available at https://roofn3d.gis.tu-berlin.de.
[1] Wichmann, A., Agoub, A., Kada, M., 2018. RoofN3D: Deep Learning Training Data for 3D Building Reconstruction. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2, pp. 1191-1198.
[2] Qi, C. R., Su, H., Mo, K., Guibas, L. J., 2017. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77-85. doi:10.1109/CVPR.2017.16.
[3] Yuan, W., Khot, T., Held, D., Mertz, C., Hebert, M., 2018. PCN: Point Completion Network. In: International Conference on 3D Vision (3DV), pp. 728-737. doi:10.1109/3DV.2018.00088.