ReaGAN: Empowering Each Node as an Intelligent Reasoning Expert in Graphs

The paper "ReaGAN: Node-as-Agent-Reasoning Graph Agentic Network" introduces ReaGAN, a graph learning framework that recasts each node in a graph as an autonomous agent able to plan, reason, and act, all driven by a frozen large language model.

Innovations

ReaGAN departs from the static, layer-by-layer message passing of traditional graph neural networks and grants each node autonomy: every node independently decides whether to aggregate information from its local neighbors, retrieve semantically similar but distant nodes, or take no action at all.
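To make this concrete, here is a minimal sketch of that per-node decision. The action names paraphrase the paper's description rather than quote its implementation, and the `llm` callable is a hypothetical stand-in for the frozen language model:

```python
from enum import Enum, auto

class Action(Enum):
    # Illustrative action space paraphrasing the paper's description;
    # these names are assumptions, not ReaGAN's actual identifiers.
    LOCAL_AGGREGATE = auto()   # aggregate messages from graph neighbors
    GLOBAL_RETRIEVE = auto()   # retrieve semantically similar but distant nodes
    NO_OP = auto()             # take no action this step

def decide_action(memory_text: str, llm) -> Action:
    """Ask a frozen LLM to choose one node's next action.

    `llm` is any callable mapping a prompt string to a completion
    string; this interface is a placeholder, not the paper's API.
    """
    prompt = (
        "You are a node in a graph. Given your memory below, reply with "
        "exactly one of: LOCAL_AGGREGATE, GLOBAL_RETRIEVE, NO_OP.\n\n"
        f"Memory:\n{memory_text}\n\nAction:"
    )
    reply = llm(prompt).upper()
    for action in Action:
        if action.name in reply:
            return action
    return Action.NO_OP  # conservative fallback when the reply is unparsable
```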


Advantages

This node-as-agent abstraction addresses two key challenges in graph learning: (1) the informational value of nodes varies widely, so a one-size-fits-all aggregation scheme is suboptimal, and (2) local structural signals must be combined with global semantic signals.

Core Modules


Each node operates in a multi-step loop, comprising four core modules: Memory, Planning, Action, and Tool Use (RAG).

Nodes construct natural language prompts from their memory, query a frozen large language model (e.g., Qwen2-14B) for their next action, execute these actions, and update their memory accordingly.
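Put together, one node's loop might look roughly like the sketch below. It reuses `decide_action` from the earlier sketch and defers `build_prompt` to the next section; the `summarize` helper, `retriever` object, `neighbor_texts` method, and step count are all assumptions, not the paper's actual machinery:

```python
def run_node_agent(node, llm, retriever, num_steps: int = 3) -> None:
    """One node's plan-act-update loop (illustrative only).

    `node.memory` is a NodeMemory record (see next section); `summarize`
    and `retriever.search` stand in for whatever aggregation and RAG
    components the paper actually uses.
    """
    for _ in range(num_steps):
        context = build_prompt(node.memory)        # Memory: serialize to a prompt
        action = decide_action(context, llm)       # Planning: frozen LLM picks an action
        if action is Action.LOCAL_AGGREGATE:       # Action: local aggregation
            node.memory.local_summary = summarize(node.neighbor_texts())
        elif action is Action.GLOBAL_RETRIEVE:     # Tool Use (RAG): fetch distant nodes
            node.memory.global_summary = summarize(
                retriever.search(node.memory.raw_text)
            )
        # NO_OP: leave memory unchanged this step
```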

Memory → Prompt

ReaGAN builds each prompt from the node's memory, combining raw text features, aggregated local and global summaries, and selected labeled neighbor examples. This gives the large language model rich, multi-scale, personalized context for planning actions or making predictions.
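A plausible shape for that memory record and its prompt assembly is sketched below; the `NodeMemory` fields and the `build_prompt` logic follow the description above but are assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class NodeMemory:
    # Field names are assumptions based on the description above.
    raw_text: str                                      # the node's own text features
    local_summary: str = ""                            # aggregated from graph neighbors
    global_summary: str = ""                           # aggregated from retrieved distant nodes
    examples: list[str] = field(default_factory=list)  # selected labeled neighbor examples

def build_prompt(mem: NodeMemory) -> str:
    """Serialize a node's memory into a multi-scale, personalized prompt."""
    parts = [f"Node text: {mem.raw_text}"]
    if mem.local_summary:
        parts.append(f"Local neighborhood summary: {mem.local_summary}")
    if mem.global_summary:
        parts.append(f"Semantically similar nodes: {mem.global_summary}")
    if mem.examples:
        parts.append("Labeled examples:\n" + "\n".join(mem.examples))
    parts.append("Using the context above, plan your next action "
                 "or predict this node's label.")
    return "\n\n".join(parts)
```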


Experimental Results

ReaGAN performs strongly on node classification tasks without requiring any fine-tuning.

On datasets such as Cora and Chameleon, it matches or exceeds traditional Graph Neural Networks despite using only a frozen large language model, demonstrating the power of structured prompting and retrieval-based reasoning.


Ablation Study

Both the agentic planning mechanism and global semantic retrieval are crucial.

Removing either component (e.g., forcing a fixed action plan or disabling RAG) leads to a significant drop in accuracy, especially on sparse graphs like Citeseer.


Main Tag: Artificial Intelligence

Sub Tags: Graph Neural Networks, Machine Learning, Agent-based AI, Large Language Models

