Automated mapping of accessibility signs with deep learning from ground-level imagery and open data


Abstract

In some areas or regions, accessible parking spots are not geolocalized and are therefore both difficult to find online and missing from open data sources. In this paper, we aim to detect accessible parking signs in street view panoramas and to geolocalize them. Object detection is an open challenge in computer vision, and numerous methods exist, based either on handcrafted features or on deep learning. Our method processes Google Street View images of French cities in order to geolocalize accessible parking signs, both on posts and painted on the ground, where the parking spot is not available in GIS systems. To accomplish this, we rely on Faster R-CNN, a deep learning object detection method with Region Proposal Networks that has demonstrated excellent performance on object detection benchmarks. This yields accurate locations of existing parking areas, which can be used to build services or to update online mapping services such as OpenStreetMap. We provide preliminary results that show the feasibility and relevance of our approach.

Citation (APA)

Nassar, A. S., & Lefevre, S. (2019). Automated mapping of accessibility signs with deep learning from ground-level imagery and open data. In 2019 Joint Urban Remote Sensing Event, JURSE 2019. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/JURSE.2019.8808961
