Autonomous Topological Reasoning with Large Language Models

arXiv, 2024

Topological reasoning with Large Language Models (LLMs) emulates human-like thought processes by efficiently exploring multiple reasoning paths over thought trees or graphs. However, prior works rely on static, task-specific prompting schedules to decompose problems and synthesize solutions; these schedules depend on a set of hyperparameters that requires extensive search to achieve high query efficiency. Moreover, the task-specific design restricts generalization to novel problem domains and fails to adapt to varying problem complexities. In this work, we investigate the ability of LLMs to guide thought graph exploration themselves, in a multi-agent architecture composed of policy agents and reasoning agents. While reasoning agents solve decomposed subproblems, LLM policy agents maintain visibility into the reasoning trace and dynamically adapt the problem-solving strategy. Through extensive controlled experiments, we observe that on problems with low decomposition depth, LLM-guided exploration can match or even outperform static schedules by up to 3.3x, with no search time required. At high decomposition depths, however, existing LLMs incur performance deterioration of up to 4.4x, highlighting the need for new solutions that enable more flexible and generalizable topological reasoning with LLMs.
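The policy/reasoning split described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `policy_agent` and `reasoning_agent` functions below are hypothetical stand-ins (toy heuristics on an arithmetic task) for what would be LLM calls, and all names are invented for this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Thought:
    """A node in the thought graph: a (sub)problem and its derived children."""
    problem: str
    children: List["Thought"] = field(default_factory=list)
    answer: Optional[str] = None

def policy_agent(node: Thought, trace: List[str]) -> str:
    """Hypothetical LLM policy agent: inspects the reasoning trace and the
    current node, then chooses an action. A toy heuristic stands in for the
    model here: decompose additions, solve atomic subproblems directly."""
    return "decompose" if "+" in node.problem else "solve"

def reasoning_agent(problem: str) -> str:
    """Hypothetical LLM reasoning agent solving one atomic subproblem
    (toy: parse an integer literal)."""
    return str(int(problem))

def explore(node: Thought, trace: List[str]) -> str:
    """Explore the thought graph: the policy agent dynamically decides, step
    by step, whether to decompose further or hand off to a reasoning agent,
    rather than following a static, task-specific schedule."""
    action = policy_agent(node, trace)
    trace.append(f"{action}: {node.problem}")
    if action == "decompose":
        left, right = node.problem.split("+", 1)
        node.children = [Thought(left.strip()), Thought(right.strip())]
        # Synthesize the children's answers (toy synthesis: sum them).
        node.answer = str(sum(int(explore(c, trace)) for c in node.children))
    else:
        node.answer = reasoning_agent(node.problem)
    return node.answer

if __name__ == "__main__":
    trace: List[str] = []
    root = Thought("12 + 30 + 0")
    print(explore(root, trace))  # → 42
    print(trace)  # full reasoning trace visible to the policy agent
```

Because the policy agent sees the full trace at every step, the decomposition depth emerges from the problem instance itself, which is what lets the approach adapt to varying problem complexities without a tuned schedule.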