Hierarchical Prompting Assists Large Language Model on Web Navigation

Abstract

Large language models (LLMs) struggle to process complicated observations in interactive decision-making tasks. To alleviate this issue, we propose a simple hierarchical prompting approach. Unlike previous prompting approaches that always place the full observation (e.g., a web page) into the prompt, we first construct an action-aware observation, which is more condensed and relevant, with a dedicated SUMMARIZER prompt. The ACTOR prompt then predicts the next action based on the summarized observation. While our method has broad applicability, we particularly demonstrate its efficacy in the complex domain of web navigation, where a full observation often contains redundant and irrelevant information. Our approach outperforms the previous state-of-the-art prompting mechanisms by 6.2% in task success rate, demonstrating its potential for interactive decision-making tasks with long observation traces.
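
To make the two-stage idea concrete, the sketch below shows how a SUMMARIZER prompt could condense a raw web page before an ACTOR prompt chooses the next action. This is a minimal illustration of the pipeline described in the abstract; the call_llm helper, both prompt templates, and the observation format are assumptions for illustration, not the authors' actual prompts or implementation.

    # Minimal sketch of the SUMMARIZER -> ACTOR prompting pipeline.
    # call_llm and the prompt templates are illustrative assumptions.

    def call_llm(prompt: str) -> str:
        """Placeholder for a call to any large language model API."""
        raise NotImplementedError

    SUMMARIZER_TEMPLATE = (
        "Task: {task}\n"
        "Previous action: {prev_action}\n"
        "Full web page observation:\n{observation}\n\n"
        "List only the page elements relevant to completing the task:"
    )

    ACTOR_TEMPLATE = (
        "Task: {task}\n"
        "Condensed observation:\n{summary}\n\n"
        "Predict the next browser action (e.g., click, type, select):"
    )

    def hierarchical_step(task: str, observation: str, prev_action: str) -> str:
        # Stage 1: SUMMARIZER condenses the raw page into an action-aware observation.
        summary = call_llm(SUMMARIZER_TEMPLATE.format(
            task=task, prev_action=prev_action, observation=observation))
        # Stage 2: ACTOR predicts the next action from the condensed observation.
        return call_llm(ACTOR_TEMPLATE.format(task=task, summary=summary))

Because the ACTOR only ever sees the condensed observation, the action-prediction prompt stays short even when the underlying page is long, which is the motivation behind the hierarchy.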

Cite

APA

Sridhar, A., Lo, C. F., Xu, F. F., Zhu, H., & Zhou, S. (2023). Hierarchical Prompting Assists Large Language Model on Web Navigation. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 10217–10244). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.685
