Agents · October 6, 2022 · Princeton University / Google Brain

ReAct: Synergizing Reasoning and Acting in Language Models

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao

Abstract

We propose ReAct, a general paradigm that synergizes reasoning and acting in large language models. ReAct prompts LLMs to generate both verbal reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with and gather additional information from external sources.

Key Findings

  1. Introduced interleaved reasoning and action generation for LLM agents
  2. Demonstrated that combining reasoning traces with actions improves performance
  3. Showed that ReAct agents can use tools (search, calculator) effectively
  4. Outperformed chain-of-thought alone on knowledge-intensive tasks
  5. Established a foundational pattern for LLM-based autonomous agents
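The interleaved loop described above can be sketched as a simple controller: the model emits a "Thought" and an "Action" at each step, the controller executes the named tool, and the resulting "Observation" is appended to the transcript before the next step. The sketch below is illustrative only; the scripted model, the `search`/`finish` action names, and the bracketed `tool[argument]` syntax are modeled on the paper's prompt format, but this is not the authors' implementation.

```python
def run_react(model, tools, question, max_steps=5):
    """Minimal ReAct-style loop: Thought -> Action -> Observation, repeated.

    `model` maps the transcript so far to one "Thought: ...\nAction: ..." pair;
    `tools` maps action names to callables. Both are hypothetical stand-ins.
    """
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model(transcript)                   # reasoning trace + action
        transcript += step + "\n"
        action = step.split("Action: ", 1)[1]      # e.g. "search[Eiffel Tower]"
        name, arg = action.split("[", 1)
        arg = arg.rstrip("]\n")
        if name == "finish":                       # terminal action: return answer
            return arg, transcript
        observation = tools[name](arg)             # act on the environment
        transcript += f"Observation: {observation}\n"
    return None, transcript

# Demo with a scripted "model" standing in for an LLM.
scripted = iter([
    "Thought: I need the height of the Eiffel Tower.\n"
    "Action: search[Eiffel Tower height]",
    "Thought: The search result says 330 m, so I can answer.\n"
    "Action: finish[330 m]",
])
tools = {"search": lambda q: "The Eiffel Tower is 330 m tall."}
answer, trace = run_react(lambda t: next(scripted), tools,
                          "How tall is the Eiffel Tower?")
```

The key design point is that the reasoning trace never leaves the transcript: each thought conditions the next action, and each observation conditions the next thought, which is the "synergy" the abstract describes.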

Impact & Significance

ReAct became a foundational paradigm for building AI agents that can both reason and take actions. It directly influenced LangChain, AutoGPT, and the broader AI agent ecosystem, shaping how modern LLM-based agents are designed.
