Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
Abstract
We introduce Tree of Thoughts (ToT), a framework that generalizes over chain-of-thought prompting and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows language models to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action.
Key Findings
1. Generalized chain-of-thought prompting to allow branching and backtracking in reasoning
2. Enabled LLMs to explore multiple reasoning paths and self-evaluate their progress
3. Achieved dramatic improvements on tasks requiring search and planning
4. Solved the Game of 24 task with 74% success vs. 4% for chain-of-thought
5. Introduced a framework for deliberate, systematic problem solving with LLMs
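The search procedure the findings describe can be sketched in a few lines: at each step the model proposes candidate "thoughts", a value function scores them, and only the best few branches are kept (breadth-first search with a beam, one of the strategies the paper explores). The `propose` and `evaluate` callables below are hypothetical stand-ins for LLM calls; the digit-appending demo is purely illustrative, not from the paper.

```python
from typing import Callable, List

def tree_of_thoughts_bfs(
    root: str,
    propose: Callable[[str], List[str]],   # generate candidate next thoughts from a state
    evaluate: Callable[[str], float],      # heuristically score a partial solution
    depth: int = 3,
    beam_width: int = 2,
) -> str:
    """Breadth-first ToT sketch: expand every frontier state, score the
    candidates, and keep only the top beam_width at each depth."""
    frontier = [root]
    for _ in range(depth):
        candidates = [t for state in frontier for t in propose(state)]
        if not candidates:
            break
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=evaluate)

# Toy demo: "thoughts" append a digit; the evaluator prefers strings whose
# digits sum highest, so the search should keep growing large digits.
propose = lambda s: [s + d for d in "123"]
evaluate = lambda s: sum(int(c) for c in s)
best = tree_of_thoughts_bfs("", propose, evaluate, depth=3, beam_width=2)
# best == "333"
```

In the paper, `propose` is a prompted LLM sampling several next reasoning steps, and `evaluate` is the same LLM asked to judge each partial solution (e.g. "sure / maybe / impossible" for Game of 24), which is what distinguishes ToT from a single left-to-right chain of thought.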
Impact & Significance
Tree of Thoughts advanced research on LLM reasoning and influenced how AI systems approach complex problem solving. Its core idea of search-based reasoning has since been adopted in production systems and in research on AI planning.
Related Papers
The Llama 3 Herd of Models
Meta AI
Qwen2 Technical Report
Alibaba Cloud / Qwen Team
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek AI
The Claude 3 Model Family: Opus, Sonnet, and Haiku
Anthropic