Developing UAVs in civil environments is currently difficult, making simulation the best option for developing complex AI-driven UAV missions. For AI training in simulation to transfer to real-world use, it should be carried out in an environment similar to the one where a real UAV will operate, with realistic objects of interest in the scene (buildings, vehicles, structures, etc.). This work aims to detect, reconstruct, and extract metadata from those objects. A UAV mission was developed that automatically detects all objects in a given area using both a simulated camera and a 2D LiDAR, and then performs a detailed scan of each object. A subsequent reconstruction process creates a 3D model of each object, along with a geo-referenced information layer containing the object's metadata. Applied to reality, this mission would ease bringing real content into a digital twin, thereby improving and extending simulation capabilities. Results show great potential even with budget-grade sensor specifications. Additional post-processing steps could reduce the artefacts in the exported 3D objects. Code, dataset, and details are available on the project page: https://danielamigo.github.io/projects/soco22/.
Amigo, D., García, J., Molina, J. M., & Lizcano, J. (2023). UAV Simulation for Object Detection and 3D Reconstruction Fusing 2D LiDAR and Camera. In Lecture Notes in Networks and Systems (Vol. 531 LNNS, pp. 31–40). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18050-7_4