ReAct: Synergizing Reasoning and Acting in Language Models
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao
Abstract
We propose ReAct, a general paradigm that synergizes reasoning and acting in large language models. ReAct prompts LLMs to generate both verbal reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with and gather additional information from external sources.
Key Findings
1. Introduced interleaved reasoning and action generation for LLM agents
2. Demonstrated that combining reasoning traces with actions improves task performance
3. Showed that ReAct agents can use external tools (e.g., search, calculator) effectively
4. Outperformed chain-of-thought prompting alone on knowledge-intensive tasks
5. Established a foundational pattern for LLM-based autonomous agents
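The interleaved Thought → Action → Observation loop behind these findings can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for any LLM completion API, and the single `calculate` tool is an illustrative placeholder for a real tool set.

```python
import re

def calculator(expr: str) -> str:
    # Toy tool: evaluate a basic arithmetic expression (builtins disabled).
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"calculate": calculator}

def react_loop(question: str, call_llm, max_steps: int = 5) -> str:
    # The transcript interleaves Thought / Action / Observation lines,
    # mirroring the ReAct prompting format.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)  # model emits one Thought and one Action
        transcript += step + "\n"
        m = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if m is None or m.group(1) == "finish":
            return m.group(2) if m else step
        tool, arg = m.group(1), m.group(2)
        obs = TOOLS.get(tool, lambda a: "unknown tool")(arg)
        transcript += f"Observation: {obs}\n"  # fed back for the next Thought
    return transcript
```

The key design point is that each tool result is appended to the transcript as an `Observation:` line, so the next reasoning step can condition on information gathered from the environment rather than on the model's parametric knowledge alone.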
Impact & Significance
ReAct became the foundational paradigm for building AI agents that both reason and take actions. It directly influenced LangChain, AutoGPT, and the broader AI-agent ecosystem, establishing the reasoning-plus-tool-use pattern that modern AI agents follow.