Lively: Enabling multimodal, lifelike, and extensible real-time robot motion

Abstract

Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with the robot's task and communication goals. However, combining these goals naïvely can result in mutually exclusive solutions or in infeasible or problematic states and actions. In this paper, we present Lively, a framework that supports configurable, real-time, task-based and communicative or socially expressive motion for collaborative and social robotics across multiple levels of programmatic accessibility. Lively supports a wide range of control methods (i.e., position, orientation, and joint-space goals) and balances them with complex procedural behaviors for natural, lifelike motion that are effective in collaborative and social contexts. We discuss the design of Lively's three levels of programmatic accessibility: LivelyStudio, a graphical user interface for visual design; the core Lively library, which gives developers full access to its capabilities; and an extensible architecture for greater customizability and capability.
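The abstract's central idea, balancing task goals against procedural "liveliness" motion through configurable weights, can be illustrated with a minimal sketch. The following Python snippet is an illustration of the concept only, not Lively's actual API: the names smooth_noise and blended_joint_target, the weight parameters, and the noise model are all hypothetical stand-ins for the library's objective-weighting machinery.

```python
"""Minimal sketch of the idea behind Lively's objective blending:
task goals (e.g., joint angles from an IK solve) are traded off
against procedural 'lifelike' motion via per-objective weights.
All names and signatures here are illustrative assumptions, not
the Lively library's confirmed interface."""

import math
import random


def smooth_noise(t, seed=0, octaves=3):
    """Cheap smooth pseudo-noise in roughly [-1, 1], a stand-in for the
    Perlin-style noise typically used to generate lifelike motion."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(octaves)]
    return sum(math.sin(t / (i + 1) + phases[i]) for i in range(octaves)) / octaves


def blended_joint_target(task_target, rest_pose, t,
                         w_task=0.8, w_lively=0.2, amplitude=0.05):
    """Blend a task-driven joint target with small procedural motion.
    The weights mirror the configurable balance between task-based and
    communicative/lifelike objectives described in the abstract."""
    return [
        w_task * q_task + (1.0 - w_task) * q_rest
        + w_lively * amplitude * smooth_noise(t, seed=i)
        for i, (q_task, q_rest) in enumerate(zip(task_target, rest_pose))
    ]


# Example: a 3-joint arm holding near a task solution while subtly "breathing".
task_q = [0.6, -0.4, 1.1]   # joint angles from a hypothetical IK solve (radians)
rest_q = [0.0, 0.0, 0.0]    # neutral rest pose
for step in range(5):
    print(blended_joint_target(task_q, rest_q, t=step * 0.1))
```

Raising w_task drives the pose toward the task solution; raising w_lively or amplitude makes the motion more expressive at some cost to task accuracy, which is the trade-off the framework is designed to manage.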

Citation (APA)

Schoen, A., Sullivan, D., Zhang, Z. D., Rakita, D., & Mutlu, B. (2023). Lively: Enabling multimodal, lifelike, and extensible real-time robot motion. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 594–602). ACM. https://doi.org/10.1145/3568162.3576982
