According to Clark's seminal work on common ground and grounding, participants collaborating in a joint activity rely on their shared information, known as common ground, to perform that activity successfully, and continually align and augment this information during their collaboration. Similarly, teams of human and artificial agents require common ground to participate successfully in joint activities. Indeed, without appropriate information being shared, using agent autonomy to reduce the workload on humans may actually increase workload as the humans seek to understand why the agents are behaving as they are. While many researchers have identified the importance of common ground in artificial intelligence, there is no precise definition of common ground on which to build the foundational aspects of multi-agent collaboration. In this paper, building on previously defined modal logics of belief, we present logical definitions for four different types of common ground. We define modal logics for three existing notions of common ground and introduce a new notion, called salient common ground. Salient common ground captures the common ground of a group participating in an activity and is based on the common ground that arises from that activity as well as on the common ground the group shared prior to the activity. We show that the four definitions share some properties, and our analysis suggests possible refinements of the existing informal and semi-formal definitions.
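As background for the modal logics of belief the abstract mentions, the epistemic-logic literature standardly characterises common belief as a fixpoint of shared belief. The sketch below gives that conventional characterisation; it is assumed background in this style of work, not necessarily the exact formulation used in the paper:

```latex
% Standard fixpoint characterisation of common belief (assumed background,
% not necessarily the paper's exact definitions).
% B_i \varphi: agent i believes \varphi.
% E_G \varphi: everyone in group G believes \varphi.
E_G\,\varphi \;\equiv\; \bigwedge_{i \in G} B_i\,\varphi
% C_G \varphi: common belief in G, the greatest fixpoint satisfying
C_G\,\varphi \;\equiv\; E_G(\varphi \wedge C_G\,\varphi)
```

Unfolding the fixpoint yields the familiar infinite conjunction: everyone believes φ, everyone believes that everyone believes φ, and so on. Logics of common ground typically build on this pattern while weakening or relativising the belief operators.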
Miller, T., Pfau, J., Sonenberg, L., & Kashima, Y. (2017). Logics of common ground. Journal of Artificial Intelligence Research, 58, 859–904. https://doi.org/10.1613/jair.5381