Data replication is a well-established technique for improving performance, availability, and scalability, requirements of many applications such as distributed multiplayer games and cooperative software tools. However, consistency of the replicated shared state is hard to ensure, and current consistency models and middleware systems lack the required adaptability and efficiency. Thus, developing such robust applications is still a daunting task. We propose a new consistency model, named Vector-Field Consistency (VFC), that unifies (i) several forms of consistency enforcement and multidimensional criteria (time, sequence, and value) to limit replica divergence with (ii) techniques based on locality-awareness (w.r.t. players' positions). Based on the VFC model, we propose a generic meta-architecture that can easily be instantiated for both centralized and (dynamically) partitioned architectures: (i) a single central server on which the VFC algorithm runs, or (ii) a set of servers, each responsible for a slice of the shared data. The first approach is clearly better suited to ad hoc networks of resource-constrained devices, while the second, being more scalable, is well suited to large-scale networks. We developed and evaluated two prototypes of VFC (for ad hoc and large-scale networks) with very good performance results. © The Brazilian Computer Society 2010.
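To make the abstract's description more concrete, the sketch below illustrates one way the two ideas could fit together: a three-dimensional divergence bound over time, sequence, and value, selected per object according to its distance from a locality pivot (e.g., a player's position). This is a minimal illustration written for this summary, not the authors' implementation; all names, thresholds, and the zone layout are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsistencyVector:
    """Hypothetical three-dimensional divergence bound (illustrative names):
    theta - maximum staleness (seconds) since the replica was last refreshed
    sigma - maximum number of remote updates not yet applied locally
    nu    - maximum accumulated value drift between replica and master copy
    """
    theta: float
    sigma: int
    nu: float

# Illustrative locality zones around the pivot (e.g., the player's position):
# nearby objects get tight bounds, distant objects tolerate more divergence.
ZONES = [
    (5.0,      ConsistencyVector(theta=0.1, sigma=1,  nu=0.0)),   # inner zone
    (20.0,     ConsistencyVector(theta=1.0, sigma=5,  nu=10.0)),  # middle zone
    (math.inf, ConsistencyVector(theta=5.0, sigma=20, nu=50.0)),  # outer zone
]

def bound_for(obj_pos, pivot_pos):
    """Pick the divergence bound for an object from its distance to the pivot."""
    dist = math.dist(obj_pos, pivot_pos)
    for radius, bound in ZONES:
        if dist <= radius:
            return bound

def must_propagate(staleness, pending_updates, value_drift, bound):
    """Push updates as soon as any of the three divergence limits is exceeded."""
    return (staleness > bound.theta
            or pending_updates > bound.sigma
            or value_drift > bound.nu)
```

Under this reading, objects near the pivot behave almost as strongly consistent replicas, while divergence is allowed to grow for objects the player is unlikely to notice, which is how the model trades consistency effort for bandwidth and scalability.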
CITATION STYLE
Veiga, L., Negrão, A., Santos, N., & Ferreira, P. (2010). Unifying divergence bounding and locality awareness in replicated systems with vector-field consistency. Journal of Internet Services and Applications, 1(2), 95–115. https://doi.org/10.1007/s13174-010-0011-x