Beyond Retrieval: Building Recursive Understanding
Recurse implements a new approach called Recursive Agentic Graph Embeddings (RAGE). Instead of treating context as flat chunks of text, RAGE parses knowledge into structured semantic frames and builds a recursive graph that evolves with every interaction. Each node in the graph carries instructions for how to interpret it and how to use it in a larger context.
Moving From Retrieval to Recursive Understanding
For years, AI has promised to transform how we manage knowledge, yet most systems today barely scratch the surface. They fetch, but rarely understand. They organize, but rarely contextualize. True intelligence and collaboration—whether between humans, AI systems, or a hybrid of the two—must mirror how humans genuinely process and interact with information: recursively, contextually, and associatively.
Recursive Cognition
Human cognition isn't linear—it's fractal. We interpret ideas, abstract them into summaries, reuse them in new contexts, and continuously refine our mental models. RAGE explicitly models this recursive structure.
Semantic Frames
RAGE identifies meaningful structures—called semantic frames—in raw text, builds hierarchical abstractions over these frames, and makes these summaries recursively available for higher-level interpretations.
Cognitive Infrastructure
RAGE is not just a tool—it's an infrastructure that aligns AI with human cognitive patterns. It provides a foundation for systems that don't merely fetch information but actively participate in understanding.
Frame Semantics: The Foundation
At the heart of RAGE lies Frame Semantics, pioneered by Charles Fillmore. This approach treats meaning as structured in frames—defined roles filled by specific elements. Unlike traditional keyword matching, frame semantics captures the relational context that gives words their meaning.
Traditional Approach
"John sold the car to Mary for $5000"
Traditional systems see: John, sold, car, Mary, $5000 as separate tokens.
Frame Semantics
Commercial_transaction
├─ Seller: John
├─ Buyer: Mary
├─ Goods: car
│ └─ ItemDetails
│ ├─ Type: vehicle
│ └─ Condition: used
└─ Money: $5000
RAGE captures the complete transactional relationship with defined roles and semantic context.
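To make the contrast concrete, here is a minimal sketch of a semantic frame as a nested data structure. The class names (`Frame`) and field layout are illustrative assumptions, not Recurse's actual API; the frame and slot names follow the example above.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Illustrative semantic frame: a name plus role-labeled slots."""
    name: str
    slots: dict  # slot name -> filler (a string or a nested Frame)

transaction = Frame(
    name="Commercial_transaction",
    slots={
        "Seller": "John",
        "Buyer": "Mary",
        "Goods": Frame(
            name="ItemDetails",
            slots={"Type": "vehicle", "Condition": "used"},
        ),
        "Money": "$5000",
    },
)

# Unlike a bag of tokens, fillers are addressable by role:
print(transaction.slots["Goods"].slots["Type"])  # vehicle
```

The point of the structure is that "car" is no longer a free-floating token: it is the `Goods` filler of a specific transaction, with its own nested detail frame.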
Recursive Graph Construction
Beyond individual frames, RAGE builds recursive graphs where frames nest within other frames, creating hierarchies of knowledge that mirror how concepts naturally relate. Traditional knowledge graphs store information as flat triples—RAGE builds recursive structures where complex ideas contain other complex ideas.
Traditional Knowledge Graph
Flat triples:
Paper_123 hasAuthor "Smith"
Paper_123 hasTitle "AI Research"
Paper_123 contains Claim_456
Claim_456 hasText "AI improves efficiency"
Evidence_789 supports Claim_456
Everything is stored as disconnected facts. Relationships exist but the deeper structure and meaning are lost.
RAGE's Recursive Structure
ResearchCategory "AI Productivity Studies"
└─ Summary "AI Research Overview"
├─ ClaimSupportStructure
│ ├─ Claim: "AI improves efficiency"
│ ├─ Evidence: "Study of 500 companies"
│ │ └─ SourceAttribution
│ │ ├─ Author: "MIT Research Lab"
│ │ └─ Date: "2024-03-15"
│ └─ Context: "Manufacturing sector"
└─ FutureWork "Test in healthcare"
Ideas nest naturally within other ideas, preserving the logical structure and making complex reasoning possible.
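The tree above can be sketched as nested nodes, with a recursive walk that recovers the full ancestor chain for any fact. This is a hypothetical illustration of the principle, not Recurse's internal representation; the `Node` class and `find_path` helper are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Illustrative recursive knowledge node: a label plus child nodes."""
    label: str
    children: list = field(default_factory=list)

research = Node("ResearchCategory: AI Productivity Studies", [
    Node("Summary: AI Research Overview", [
        Node("ClaimSupportStructure", [
            Node("Claim: AI improves efficiency"),
            Node("Evidence: Study of 500 companies", [
                Node("SourceAttribution", [
                    Node("Author: MIT Research Lab"),
                    Node("Date: 2024-03-15"),
                ]),
            ]),
            Node("Context: Manufacturing sector"),
        ]),
        Node("FutureWork: Test in healthcare"),
    ]),
])

def find_path(node, target, path=()):
    """Return the chain of ancestor labels leading to `target`, or None."""
    path = path + (node.label,)
    if node.label == target:
        return path
    for child in node.children:
        found = find_path(child, target, path)
        if found:
            return found
    return None

# The provenance of a date is recoverable from the structure itself:
print(find_path(research, "Date: 2024-03-15"))
```

With flat triples, answering "where does this date come from?" requires joining disconnected facts; with the nested structure, the answer falls out of a single recursive traversal.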
Operations as Knowledge: The Self-Instructing System
RAGE doesn't just store knowledge—it stores knowledge about how to work with knowledge. The system treats actions like "summarize this research" or "find related concepts" as structured information that can be discovered, suggested, and combined intelligently. Most importantly, the system instructs itself through these operation hints—it ingests knowledge, identifies potential actions, and then comes back to perform them later.
Traditional Approach
Fixed menu of actions:
• Summarize
• Find related
• Generate questions
Same options everywhere, regardless of content type or context.
The system waits for explicit commands and cannot learn from patterns of use.
RAGE's Self-Instructing Operations
Context-aware suggestions:
• For a hypothesis: "Generate test cases"
• For evidence: "Find contradictions"
• For a method: "Identify limitations"
The system creates its own to-do list based on what it learns.
The system understands your content, suggests relevant actions, and creates operation hints for future processing—essentially instructing itself.
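A minimal sketch of the self-instructing loop might look like the following. The `SUGGESTIONS` mapping, `ingest` function, and hint-queue shape are all hypothetical, chosen only to mirror the bullets above: ingestion tags content with a content-aware follow-up action, which the system performs later.

```python
# Hypothetical content-type -> operation mapping, mirroring the examples above.
SUGGESTIONS = {
    "hypothesis": "Generate test cases",
    "evidence": "Find contradictions",
    "method": "Identify limitations",
}

def ingest(item_type, text, hint_queue):
    """Store content and, if a relevant operation exists, enqueue a hint."""
    hint = SUGGESTIONS.get(item_type)
    if hint:
        hint_queue.append({"action": hint, "target": text})
    return text

queue = []
ingest("evidence", "Study of 500 companies", queue)
ingest("hypothesis", "AI improves efficiency", queue)

# Later, the system drains its own to-do list:
for task in queue:
    print(f'{task["action"]} -> {task["target"]}')
```

The key design choice is that operations live in the same store as the knowledge they act on, so the system's to-do list grows out of ingestion rather than waiting for a user command.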
Lineage of Ideas
RAGE stands on the shoulders of giants, drawing from diverse foundational theories:
Frame Semantics
Charles Fillmore's theory that meaning is structured in frames—defined roles filled by specific elements.
Society of Mind
Minsky's concept of knowledge as nested structures with triggers and roles.
Construction Grammar
Adele Goldberg's theory that meaning emerges from recurring relational patterns.
Discourse Representation Theory
Hans Kamp's approach to context and meaning across sentences and conversations.
Homoiconicity
Lisp's principle where data and code share structures, echoed in how RAGE treats frames as executable constructs.
Why RAGE vs Other RAG Systems?
Here's how Recurse's RAGE approach compares to traditional RAG and GraphRAG systems:
| Feature / Model | Traditional RAG | GraphRAG | RAGE via Recurse |
|---|---|---|---|
| Data Chunking | Heuristic token windows | Graph-node-level chunks | Semantic frames and slot hierarchies |
| Retrieval | Vector similarity | Graph traversal (basic) | Recursive graph traversal with slot-aware anchors |
| Context Injection | Append to prompt | Pre-chain static context | Flattened and layered semantic context |
| Schema Awareness | None | Manual node types | Automated frame-based schema generation |
| Reasoning Support | One-hop | One/two-hop | Native multi-hop + abstraction layering |
| Composability | Prompt-level only | Per-query only | Frame-level reusability + rehydration |
| Output Traceability | Poor | Some via graph | Full — every answer linked to source frames |
| Input Flexibility | Mostly text/PDF | Requires pre-structured data | Any input: Slack, email, docs, audio (via plugins) |
| Agent Integration | Hard to maintain state | Limited memory graphs | Native agent memory + semantic planning substrate |
Context isn't optional anymore
Whether you're designing agents, research platforms, or adaptive interfaces — Recurse gives you the structure and semantic depth traditional tools lack. It's the memory substrate for AI-native infrastructure.