Protect Trajectory Privacy in Food Delivery with Differential Privacy and Multi-agent Reinforcement Learning

Abstract

Today, multiple food delivery companies operate globally across different regions, and this expansion can put users' data at risk. Trajectory and order data are often stored by third parties and used for further analysis, so the records must be stored in a way that prevents anyone from recovering the real data if they are disclosed. This work addresses this issue and preserves the privacy of stored customer data by combining differential privacy with multi-agent reinforcement learning. After the agent delivers the food to the customer, it constructs N obfuscated trajectories, each generated with a different privacy budget. A multi-agent reinforcement learning policy then selects one of these candidate trajectories. The selected trajectory is evaluated against three factors: its similarity to the original trajectory, the sensitivity of the destination location, and how frequently the customer places orders. We evaluate our approach on meal delivery data sets from Iowa City, USA.
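The following is a minimal Python sketch of the pipeline the abstract describes, under stated assumptions: trajectory points are perturbed with Laplace noise (one candidate per privacy budget), and each candidate is scored by the three factors listed above. The function names (obfuscate_trajectory, score_candidate), the weights, and the final argmax selection are illustrative assumptions only; in the paper the selection is learned by a multi-agent reinforcement learning policy rather than a fixed rule.

import numpy as np

def obfuscate_trajectory(trajectory, epsilon, sensitivity=1.0, rng=None):
    # Perturb every (x, y) point with Laplace noise; a smaller epsilon
    # (stricter privacy budget) means a larger noise scale.
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return trajectory + rng.laplace(loc=0.0, scale=scale, size=trajectory.shape)

def score_candidate(original, candidate, dest_sensitivity, order_frequency,
                    weights=(0.5, 0.3, 0.2)):
    # Factor 1: similarity between the candidate and the original trajectory.
    similarity = 1.0 / (1.0 + np.linalg.norm(original - candidate, axis=1).mean())
    # Factors 2 and 3: sensitive destinations and frequent customers reward
    # a larger displacement of the final (destination) point.
    dest_shift = np.linalg.norm(original[-1] - candidate[-1])
    privacy_gain = dest_shift / (1.0 + dest_shift)
    w_sim, w_dest, w_freq = weights
    return (w_sim * similarity
            + w_dest * dest_sensitivity * privacy_gain
            + w_freq * order_frequency * privacy_gain)

# Build N candidates, one per privacy budget, and pick the highest-scoring one.
# (The paper replaces this argmax with a learned multi-agent RL selection.)
rng = np.random.default_rng(0)
original = np.cumsum(rng.normal(size=(20, 2)), axis=0)   # toy delivery route
budgets = [0.1, 0.5, 1.0, 2.0, 5.0]
candidates = [obfuscate_trajectory(original, eps, rng=rng) for eps in budgets]
scores = [score_candidate(original, c, dest_sensitivity=0.8, order_frequency=0.6)
          for c in candidates]
print("chosen privacy budget:", budgets[int(np.argmax(scores))])

Scoring the destination displacement separately from overall similarity reflects the abstract's emphasis on destination sensitivity and order frequency; a full implementation would also handle geographic coordinates and the RL training loop.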

Citation (APA)

Abahussein, S., Zhu, T., Ye, D., Cheng, Z., & Zhou, W. (2023). Protect Trajectory Privacy in Food Delivery with Differential Privacy and Multi-agent Reinforcement Learning. In Lecture Notes in Networks and Systems (Vol. 655 LNNS, pp. 48–59). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-28694-0_5
