ISPRS Benchmark on Multisensorial Indoor Mapping and Positioning
The ISPRS BENCHMARK ON MULTISENSORIAL INDOOR MAPPING AND POSITIONING (MiMAP) project provides a common framework for the evaluation and comparison of LiDAR-based SLAM, BIM feature extraction, and smartphone-based indoor positioning methods. Datasets are available from this webpage and the mirror website http://mi3dmap.net/. Interested participants are invited to test their methods and submit their results for evaluation. For each dataset, we provide an evaluation metric and a result submission webpage. Evaluation results will be published on each dataset's webpage.
Activities and Benchmark Datasets
The MiMAP project team upgraded the XBeibao system (a multi-sensor backpack system developed by Xiamen University) to build the MiMAP benchmark. The upgraded system (Figure 1) synchronously collects data from multi-beam laser scanners, fisheye cameras, and smartphone sensors.
Figure 1. The XBeibao system with a smartphone attached on top
The MiMAP benchmark includes three datasets:
For the indoor LiDAR SLAM dataset, we collected indoor point clouds in three multi-floor buildings with the upgraded XBeibao. The dataset represents typical indoor building complexity. We provide the raw data of one indoor scene with ground truth for users' own evaluation, and the raw data of two further scenes for evaluation via result submission. The evaluation criterion is the error relative to a ground-truth point cloud acquired with a millimeter-accuracy terrestrial laser scanner (TLS) (Figure 2(b)).
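The official evaluation script is available via the submission webpage. As a rough illustration only (not the official metric), a cloud-to-cloud error of this kind is often computed as the mean nearest-neighbor distance from each point of the submitted cloud to the TLS reference cloud; the function name below is our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(estimated, reference):
    """Mean nearest-neighbor distance (in the clouds' units) from each
    estimated point to the reference (e.g. TLS) cloud.
    `estimated` and `reference` are (N, 3) and (M, 3) float arrays."""
    tree = cKDTree(reference)            # spatial index over the reference cloud
    distances, _ = tree.query(estimated, k=1)  # distance to nearest reference point
    return distances.mean()

# Toy example: a 5 x 5 reference grid and an estimate shifted by 1 cm in x.
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
est = ref + np.array([0.01, 0.0, 0.0])
print(round(cloud_to_cloud_error(est, ref), 3))  # 0.01
```

In practice the comparison is usually done after registering the submitted cloud to the reference frame; robust variants trim or cap large distances to reduce the influence of outliers.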
For the BIM feature extraction dataset, we prepared three point clouds for evaluating BIM feature extraction from indoor 3D point clouds. The ground-truth data were built manually; examples are shown in Figure 3.
For the smartphone-based indoor positioning dataset, we provide two data sequences with ground truth for users' own evaluation, and three further sequences for evaluation via result submission. The evaluation criterion is the error relative to the centimeter-accuracy platform trajectory produced by the SLAM algorithm (Figure 4).
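Again, the official evaluation script is the authoritative reference. As a minimal sketch of how a positioning error against a reference trajectory is commonly computed (the reference is interpolated at the estimate's timestamps, then a position RMSE is taken; function name is our own assumption):

```python
import numpy as np

def trajectory_rmse(est_t, est_xyz, ref_t, ref_xyz):
    """RMSE of position error between an estimated trajectory and a
    reference trajectory. The reference is linearly interpolated at the
    estimate's timestamps. `est_xyz` is (N, 3), `ref_xyz` is (M, 3)."""
    ref_interp = np.column_stack(
        [np.interp(est_t, ref_t, ref_xyz[:, i]) for i in range(3)]
    )
    errors = np.linalg.norm(est_xyz - ref_interp, axis=1)  # per-sample 3D error
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy example: straight-line motion, estimate offset by 5 cm in y.
t = np.linspace(0.0, 10.0, 101)
ref = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])
est = ref + np.array([0.0, 0.05, 0.0])
print(round(trajectory_rmse(t, est, t, ref), 3))  # 0.05
```

Note that this assumes both trajectories are expressed in the same coordinate frame and share a common time base; in a real evaluation a spatial and temporal alignment step would precede the error computation.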
Figure 2. The Indoor LiDAR SLAM benchmark
(a) multi-beam laser scanning data and (b) the TLS reference point cloud
Figure 3. The BIM feature extraction benchmark
(a-c) indoor point clouds, (d-f) the corresponding BIM frame features
Figure 4. The smartphone-based indoor positioning benchmark
(a) Setup of the attached smartphone, (b) the SLAM platform trajectory as the synchronized reference
Conditions of use
Download of the datasets is subject to the following conditions:
- Datasets must be used for research purposes only. Any other use is prohibited.
- Datasets must not be distributed to third parties. Any person interested in the data may obtain them via ISPRS WG I/6.
- If the datasets are used in any publication, please acknowledge ISPRS WG I/6 for the provision of the data and cite the benchmark papers:
C. Wen, Y. Dai, Y. Xia, Y. Lian, C. Wang, J. Li, Towards Efficient 3-D Colored Mapping in GPS/GNSS-denied Environments, IEEE Geoscience and Remote Sensing Letters, Vol. 17, pp. 147-151, 2020.
C. Wang, S. Hou, C. Wen, Z. Gong, Q. Li, X. Sun, J. Li, Semantic Line Framework-based Indoor Building Modeling using Backpacked Laser Scanning Point Cloud, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 143, pp. 150-166, 2018.
Please visit http://mi3dmap.net/ to download the data
Dr. Cheng Wang (Principal Investigator)
Xiamen University, China
Dr. Naser El-Sheimy (Co-Investigator)
University of Calgary, Canada
Dr. Chenglu Wen (Co-Investigator)
Xiamen University, China
Dr. Guenther Retscher (Co-Investigator)
TU Wien - Vienna University of Technology, Austria
Dr. Zhizhong Kang (Co-Investigator)
China University of Geosciences, Beijing
Dr. Andrea Lingua (Co-Investigator)
Polytechnic University of Turin, Italy