2026 Complete Guide: OpenClaw LCM Plugin — Never Lose a Single Conversation Again
🎯 Key Takeaways (TL;DR)
- The Lossless-Claw plugin replaces OpenClaw's default context engine with a DAG-based storage system that never throws away conversation history
- Every message is persisted to SQLite and summarized into expandable nodes — you can drill back into any point of your conversation
- Setup takes under 5 minutes: install the plugin, flip one config flag, and you're running
- Cost-conscious users can route summarization through a cheaper model (e.g., Claude Haiku) while keeping the main conversation on a premium model
- This guide covers installation, configuration, architecture, agent tools, and troubleshooting — everything you need in one place
Table of Contents
- What Problem Does LCM Solve?
- Installation Walkthrough
- How the DAG Model Works
- Configuration Deep Dive
- Agent Tools: grep, describe, expand_query
- Architecture Internals
- Advantages Over Traditional Context Management
- Known Limitations
- Troubleshooting Common Issues
- FAQ
What Problem Does LCM Solve?
By default, OpenClaw uses a legacy context engine that truncates or slides old messages out of the context window as conversations grow. Once those messages are gone, the agent loses access to earlier context entirely. This is a fundamental problem for long-running projects, complex debugging sessions, or any conversation that spans days or weeks.
The Lossless-Claw plugin replaces this with a fundamentally different approach:
- Every message is persisted to a local SQLite database — nothing is ever deleted
- Old messages are summarized into a DAG (Directed Acyclic Graph) of layered summaries
- The agent can drill back into any summary to recover full details on demand
- Context assembly is budget-aware, fitting the most relevant information into the model's context window
The result: conversations that can run for hundreds or thousands of turns without the agent "forgetting" what happened earlier.
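The budget-aware assembly step can be pictured as a greedy selection: always keep the recent raw tail, then fill the remaining token budget with the most valuable summaries. The sketch below is illustrative only; the `ContextItem` shape, the `relevance` score, and the `assembleContext` name are assumptions, not the plugin's actual API.

```typescript
// Illustrative sketch of budget-aware context assembly.
// The item shape and scoring are assumptions, not the plugin's real API.
interface ContextItem {
  id: string;
  tokens: number;
  relevance: number; // higher = more worth including
}

// Always keep the fresh tail, then greedily add the most relevant
// summaries that still fit the remaining token budget.
function assembleContext(
  freshTail: ContextItem[],
  summaries: ContextItem[],
  budget: number
): ContextItem[] {
  const tailTokens = freshTail.reduce((n, m) => n + m.tokens, 0);
  let remaining = budget - tailTokens;
  const picked: ContextItem[] = [];
  for (const s of [...summaries].sort((a, b) => b.relevance - a.relevance)) {
    if (s.tokens <= remaining) {
      picked.push(s);
      remaining -= s.tokens;
    }
  }
  return [...picked, ...freshTail]; // summaries first, then recent raw messages
}
```

The key property this preserves: the fresh tail is never sacrificed, and summaries compete only for whatever budget remains.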
Installation Walkthrough
From npm (Recommended)
```shell
openclaw plugins install @martian-engineering/Lossless-Claw
```
From a Local Clone (for Development)
```shell
git clone https://github.com/Martian-Engineering/Lossless-Claw.git
openclaw plugins install --link ./Lossless-Claw
```
Activate as the Context Engine
This step is required. Without it, the plugin loads but does not run — the default legacy engine remains active.
```shell
openclaw config set plugins.slots.contextEngine Lossless-Claw
```
Verify
```shell
openclaw plugins list
```
You should see Lossless-Claw listed as enabled, with the contextEngine slot assigned to it.
Update
```shell
openclaw plugins update @martian-engineering/Lossless-Claw
# Or update all plugins at once:
openclaw plugins update --all
```
How the DAG Model Works
The Core Insight
Traditional context management is linear: keep the latest N messages, discard the rest. LCM instead builds a layered DAG of summaries over the full history:
```
Raw messages:   [m1] [m2] [m3] ... [m20] [m21] ... [m40] ... [m80] ... [m100]
                     ↓ chunk              ↓ chunk             ↓ chunk
Leaf (d0):      [leaf_1: m1-m20]  [leaf_2: m21-m40]  [leaf_3: ...]  [leaf_4: ...]
                         ↓                    ↓
Condensed (d1):  [cond_1: leaf_1 + leaf_2]  [cond_2: leaf_3 + leaf_4]
                              ↓                    ↓
Condensed (d2):        [cond_3: cond_1 + cond_2]
                                  ↑
                           still expandable
```
Each node carries metadata: time range, token counts, descendant counts, and references to its sources. The agent sees summaries in the context window, and uses retrieval tools to drill into any node for full detail.
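The node metadata described above can be sketched as a small type, together with the "drill back in" operation that resolves a summary down to the raw messages it covers. The field and function names here are hypothetical; only the metadata list mirrors the text above.

```typescript
// Hypothetical shape of a DAG node, based on the metadata described above.
interface SummaryNode {
  id: string;
  depth: number;                // 0 = leaf, 1+ = condensed
  timeRange: [string, string];  // first/last message timestamps covered
  tokenCount: number;
  descendantCount: number;
  sourceIds: string[];          // child summaries, or raw message IDs at depth 0
}

// Walk a node down to the raw message IDs it ultimately covers.
// Any ID not present in the node map is treated as a raw message ID.
function rawMessageIds(id: string, nodes: Map<string, SummaryNode>): string[] {
  const node = nodes.get(id);
  if (!node) return [id];
  return node.sourceIds.flatMap((src) => rawMessageIds(src, nodes));
}
```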
Lifecycle Hooks
The engine hooks into four points in OpenClaw's conversation flow:
| Phase | What Happens |
|---|---|
| Bootstrap | On session startup, reconciles the JSONL session file with the SQLite database. Imports any messages that appeared since the last checkpoint. |
| Assemble | Before each model call, builds the message array within the token budget: recent raw messages (the "fresh tail") plus selected summaries from the DAG. |
| After Turn | After the model responds, persists new messages and evaluates whether compaction is needed. |
| Compact | When the context exceeds the threshold, runs leaf and/or condensed summarization passes to compress older content. |
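The four phases above can be sketched as an engine interface. The hook names mirror the table; the signatures, and the synchronous style used here for brevity, are assumptions rather than the plugin's real contract.

```typescript
// Illustrative sketch of the four lifecycle phases as an engine interface.
// Signatures (and the synchronous style, used for brevity) are assumptions.
type Message = { role: string; content: string };

interface ContextEngine {
  bootstrap(): void;                        // reconcile JSONL session file with SQLite
  assemble(tokenBudget: number): Message[]; // build the prompt within budget
  afterTurn(newMessages: Message[]): void;  // persist + decide on compaction
  compact(): void;                          // summarize older content
}

// One conversation turn: assemble the prompt, then persist the reply.
function runTurn(engine: ContextEngine, reply: Message[]): Message[] {
  const prompt = engine.assemble(100_000);
  engine.afterTurn(reply);
  return prompt;
}
```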
Compaction: Three Escalation Levels
Every summarization attempt follows a fallback chain to guarantee progress:
1. Normal — Full-fidelity prompt, temperature 0.2, target ~1200 tokens
2. Aggressive — Tighter prompt with fewer details, temperature 0.1, lower token target
3. Deterministic fallback — Truncates to ~512 tokens with a `[Truncated for context management]` marker
Even if the summarization model is down or returns garbage, compaction still succeeds.
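The escalation logic can be sketched as follows. The summarizer functions are placeholders, and the character-based truncation stands in for the plugin's ~512-token cut; only the escalation order mirrors the chain above.

```typescript
// Sketch of the three-level fallback chain. The summarizers are
// placeholders; only the escalation logic mirrors the behavior above.
type Summarizer = (text: string) => string; // may throw on model failure

function summarizeWithFallback(
  text: string,
  normal: Summarizer,
  aggressive: Summarizer
): string {
  for (const attempt of [normal, aggressive]) {
    try {
      return attempt(text);
    } catch {
      // model error or unusable output: escalate to the next level
    }
  }
  // Deterministic fallback: truncate (characters here stand in for the
  // plugin's ~512-token cut) and append the marker.
  return text.slice(0, 512) + "\n[Truncated for context management]";
}
```

Because the last level is pure string manipulation, the chain cannot fail: some compressed representation always comes out.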
Large File Handling
When a message contains a file (code paste, log dump, etc.) exceeding the largeFileTokenThreshold (default 25,000 tokens):
- The file content is extracted and stored on disk (`~/.openclaw/lcm-files/`)
- A ~200-token structural summary replaces the file in the message
- The agent can retrieve the full file via `lcm_describe`
This prevents a single large paste from consuming the entire context window.
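The externalization decision can be sketched as a simple threshold check. The function name, result shape, and placeholder text are hypothetical; only the threshold default and the retrieval hint come from the behavior described above.

```typescript
// Illustrative check for large-file externalization. Token counting,
// naming, and the placeholder text are simplified stand-ins.
interface ProcessedContent {
  inline: string;        // what remains in the message
  externalPath?: string; // set only when the content was externalized
}

function processContent(
  fileId: string,
  content: string,
  tokenCount: number,
  largeFileTokenThreshold = 25_000
): ProcessedContent {
  if (tokenCount <= largeFileTokenThreshold) {
    return { inline: content };
  }
  // In the real plugin the content is written under ~/.openclaw/lcm-files/
  // and replaced by a ~200-token structural summary; both are faked here.
  const path = `~/.openclaw/lcm-files/${fileId}`;
  const placeholder = `[Large file externalized to ${path}; retrieve via lcm_describe]`;
  return { inline: placeholder, externalPath: path };
}
```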
Configuration Deep Dive
Open your config with `openclaw config edit` and add settings under `plugins.entries.Lossless-Claw.config`:
```jsonc
{
  "plugins": {
    "slots": {
      "contextEngine": "Lossless-Claw"
    },
    "entries": {
      "Lossless-Claw": {
        "enabled": true,
        "config": {
          // All fields are optional — defaults are sensible
        }
      }
    }
  }
}
```
All settings can also be overridden via environment variables (prefix `LCM_`, e.g. `LCM_FRESH_TAIL_COUNT=32`). Environment variables take highest precedence.
Key Parameters
| Parameter | Default | Description |
|---|---|---|
| `contextThreshold` | 0.75 | Fraction of the model's context window that triggers compaction. At 0.75, compaction fires when 75% of the budget is consumed. |
| `freshTailCount` | 20 | Number of most recent raw messages that are always included and never compacted. This is the agent's "working memory." |
| `incrementalMaxDepth` | -1 | How deep incremental (per-turn) condensation goes. `0` = leaf passes only, `1` = one condensation level, `-1` = unlimited. |
| `dbPath` | `~/.openclaw/lcm.db` | Path to the SQLite database. |
| `summaryModel` | (session model) | Model override for summarization. Use a cheaper/faster model to reduce costs (e.g., `anthropic/claude-haiku-4-5`). |
| `expansionModel` | (session model) | Model override for the `lcm_expand_query` sub-agent. |
| `largeFileTokenThreshold` | 25000 | Files above this token count are externalized to disk. |
Session Filtering
| Parameter | Description |
|---|---|
| `ignoreSessionPatterns` | Glob patterns for sessions to exclude entirely. Example: `["agent:*:cron:**"]` excludes all cron sessions. |
| `statelessSessionPatterns` | Glob patterns for sessions that can read from the database but never write. Example: `["agent:*:subagent:**"]` lets sub-agents access parent context without polluting the DB. |
| `skipStatelessSessions` | When `true`, stateless sessions skip all LCM persistence. |
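How such patterns match a session ID can be sketched with a minimal glob matcher. The exact semantics are an assumption: here `*` matches a single `:`-delimited segment and `**` matches any remainder.

```typescript
// Minimal sketch of session-pattern matching. Assumed semantics:
// `*` = one `:`-delimited segment, `**` = any remainder.
function matchesSessionPattern(session: string, pattern: string): boolean {
  const regex = pattern
    .split(":")
    .map((seg) =>
      seg === "**" ? ".*"
      : seg === "*" ? "[^:]+"
      : seg.replace(/[.*+?^${}()|[\]\\]/g, "\\$&") // escape literal segments
    )
    .join(":");
  return new RegExp(`^${regex}$`).test(session);
}
```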
Recommended Configurations
General use (balanced):
```json
{
  "contextThreshold": 0.75,
  "freshTailCount": 32,
  "incrementalMaxDepth": -1
}
```
Long-running sessions (hundreds of turns):
```json
{
  "contextThreshold": 0.8,
  "freshTailCount": 32,
  "incrementalMaxDepth": 2
}
```
Cost-sensitive (minimize summarization calls):
```json
{
  "contextThreshold": 0.85,
  "freshTailCount": 16,
  "summaryModel": "anthropic/claude-haiku-4-5"
}
```
Agent Tools: grep, describe, expand_query
Once active, LCM registers three tools that the agent can call to retrieve compressed context:
lcm_grep — Fast Full-Text Search
```javascript
lcm_grep({ pattern: "database migration", mode: "full_text" })
lcm_grep({ pattern: "error.*timeout", mode: "regex", scope: "messages" })
lcm_grep({ pattern: "deployment", since: "2026-03-01", limit: 20 })
```
- Fast (<100ms) — direct SQLite query
- Supports FTS5 when available, with automatic LIKE-based fallback for CJK text
- Scope to `messages`, `summaries`, or `both`
- Filter by time range with `since`/`before`
lcm_describe — Direct Metadata Lookup
```javascript
lcm_describe({ id: "sum_abc123" })
lcm_describe({ id: "file_xyz789" })
```
- Fast (<100ms) — direct lookup
- For summaries: returns full content, metadata, parent/child links, source message IDs, and subtree structure
- For files: returns full file content and exploration summary
lcm_expand_query — Deep Recall via Sub-Agent
```javascript
lcm_expand_query({
  prompt: "What were the exact SQL migrations we discussed for the users table?",
  summaryIds: ["sum_abc123"]
})
```
- Slow but powerful (~30-120 seconds) — spawns a sub-agent that traverses the DAG
- The sub-agent has read-only access scoped to the current conversation
- Access is time-limited (5-minute TTL) and automatically revoked
- Best used when `lcm_grep` or `lcm_describe` are not specific enough
When to Use Each Tool
| Need | Tool | Why |
|---|---|---|
| "Did we discuss X?" | lcm_grep | Fast keyword/regex scan |
| "What does this summary contain?" | lcm_describe | Direct metadata lookup |
| "What exactly did we decide about X three days ago?" | lcm_expand_query | Deep recall with evidence |
Architecture Internals
```
               ┌─────────────────────┐
               │  OpenClaw Gateway   │
               └──────────┬──────────┘
                          │
                 ┌────────▼────────┐
                 │  Agent Runtime  │
                 └────────┬────────┘
                          │
        ┌─────────────────┼─────────────────┐
        │                 │                 │
┌───────▼───────┐ ┌───────▼───────┐ ┌───────▼───────┐
│   Bootstrap   │ │   Assemble    │ │  After Turn   │
│ (session sync)│ │ (build prompt)│ │  (persist +   │
│               │ │               │ │   compact?)   │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘
        │                 │                 │
        └─────────────────┼─────────────────┘
                          │
             ┌────────────▼────────────┐
             │     SQLite Database     │
             │   ┌──────────────────┐  │
             │   │ messages         │  │
             │   │ summaries (DAG)  │  │
             │   │ context_items    │  │
             │   │ large_files      │  │
             │   └──────────────────┘  │
             └────────────┬────────────┘
                          │
            ┌─────────────┼─────────────┐
            │             │             │
      ┌─────▼─────┐  ┌────▼────┐  ┌─────▼──────┐
      │ lcm_grep  │  │lcm_desc │  │lcm_expand  │
      │ (search)  │  │(inspect)│  │(sub-agent) │
      └───────────┘  └─────────┘  └────────────┘
```
Crash Recovery
The bootstrap system tracks reconciliation progress with byte offsets and entry hashes. If OpenClaw crashes mid-session, the next startup picks up exactly where it left off — no duplicate ingestion, no lost messages.
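The offset-plus-hash idea can be sketched as follows. The checkpoint shape, the toy hash, and the character-based offset (standing in for byte offsets) are all illustrative, not the plugin's actual format.

```typescript
// Sketch of checkpointed JSONL reconciliation. Checkpoint shape and hash
// are illustrative; character offsets stand in for byte offsets.
interface Checkpoint {
  offset: number;   // extent of the JSONL file already ingested
  lastHash: string; // hash of the last ingested line, to detect rewrites
}

function tinyHash(s: string): string {
  let h = 5381;
  for (let i = 0; i < s.length; i++) h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  return h.toString(16);
}

// Returns lines that appeared after the checkpoint, plus the new checkpoint.
// Throws if the already-ingested prefix no longer matches the checkpoint.
function reconcile(jsonl: string, cp: Checkpoint): { fresh: string[]; next: Checkpoint } {
  const prefix = jsonl.slice(0, cp.offset);
  const prevLines = prefix.split("\n").filter((l) => l.length > 0);
  const last = prevLines[prevLines.length - 1] ?? "";
  if (cp.offset > 0 && tinyHash(last) !== cp.lastHash) {
    throw new Error("session file rewritten; full re-import needed");
  }
  const fresh = jsonl.slice(cp.offset).split("\n").filter((l) => l.length > 0);
  const newLast = fresh[fresh.length - 1] ?? last;
  return { fresh, next: { offset: jsonl.length, lastHash: tinyHash(newLast) } };
}
```

The hash guard is what makes resumption safe: if the file was rewritten behind the checkpoint, the mismatch is detected instead of silently ingesting duplicates.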
Sub-agent Isolation
The expansion system uses scoped delegation grants with TTL and explicit revocation. Sub-agents get read-only access to exactly the conversations they need, with automatic cleanup on completion or timeout.
Advantages Over Traditional Context Management
Nothing Is Lost
Every message is persisted. Summaries link back to source messages. The agent can always recover full details through lcm_expand_query. This is fundamentally different from sliding-window truncation where old context is gone forever.
Intelligent Compression
Depth-aware summarization prompts produce different summary styles at each level:
- Leaf summaries preserve specific decisions, commands, errors, and rationale
- Mid-level summaries extract themes, key decisions, and unresolved tensions
- High-level summaries capture session arcs, major turning points, and long-term constraints
Cost Control
You can use a cheaper model for summarization (e.g., Haiku) while keeping the main conversation on a more capable model (e.g., Opus). The summaryModel and expansionModel settings make this explicit.
Session Filtering
Glob patterns let you exclude noisy sessions (cron jobs, heartbeats) from storage, and mark sub-agent sessions as stateless so they benefit from parent context without polluting the database.
Known Limitations
Summarization Quality Depends on the Model
The summaries are only as good as the model producing them. Using a very cheap or small model for summarization may lose nuance. Important details can be compressed away even with good models — the lcm_expand_query tool mitigates this but adds latency.
Expansion Is Slow
lcm_expand_query spawns a sub-agent, which takes 30-120 seconds. For quick recall, lcm_grep and lcm_describe are far faster but less capable. In time-sensitive workflows, the agent may skip expansion and work from summaries alone.
Storage Growth
The SQLite database grows with every message. Long-running heavy sessions (thousands of turns with large tool outputs) can produce databases in the hundreds of megabytes. Large files externalized to disk add to this. There is no built-in garbage collection or retention policy — old conversations persist indefinitely.
Single-Model Summarization
Each summarization pass uses one model call. There is no ensemble or verification step. If the model hallucinates or misinterprets context during summarization, that error propagates into the DAG and may affect future assembly.
No Cross-Session Context
Each conversation is independent in the database. LCM does not automatically share context between different sessions or agents. The allConversations flag on retrieval tools allows cross-conversation search, but there is no automatic cross-pollination during assembly.
CJK Full-Text Search Limitations
FTS5 (SQLite's full-text search engine) does not tokenize Chinese, Japanese, or Korean text well. LCM falls back to LIKE-based search for CJK queries, which is slower and less precise for large databases.
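The fallback decision can be sketched as a CJK detection step that switches query strategies. The SQL strings, table names, and function names below are assumptions for illustration; only the FTS5-vs-LIKE split mirrors the behavior described.

```typescript
// Sketch of choosing between FTS5 and a LIKE fallback for CJK queries.
// SQL strings and table/column names are assumptions.
function hasCjk(text: string): boolean {
  // CJK Unified Ideographs, Hiragana, Katakana, Hangul syllables
  return /[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]/.test(text);
}

function buildSearchQuery(pattern: string): { sql: string; param: string } {
  if (hasCjk(pattern)) {
    // FTS5 tokenizers split poorly on CJK: fall back to substring match.
    return {
      sql: "SELECT id FROM messages WHERE content LIKE ?",
      param: `%${pattern}%`,
    };
  }
  return {
    sql: "SELECT id FROM messages WHERE rowid IN (SELECT rowid FROM messages_fts WHERE messages_fts MATCH ?)",
    param: pattern,
  };
}
```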
Compaction Latency
Each compaction pass requires an LLM call (typically 5-15 seconds per leaf or condensed pass). During heavy compaction, this can add noticeable delay after a turn completes. The afterTurn hook serializes compaction per-session, so it does not block other sessions.
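Per-session serialization can be sketched as a promise chain keyed by session ID: passes for one session queue behind each other, while other sessions proceed independently. The names here are illustrative, not the plugin's internals.

```typescript
// Sketch of per-session compaction serialization: each session keeps its
// own promise chain, so passes run one at a time per session without
// blocking other sessions. Names are illustrative.
const compactionChains = new Map<string, Promise<void>>();

function enqueueCompaction(sessionId: string, pass: () => Promise<void>): Promise<void> {
  const prev = compactionChains.get(sessionId) ?? Promise.resolve();
  const next = prev.then(pass, pass); // run even if the previous pass failed
  compactionChains.set(sessionId, next.catch(() => {})); // keep the chain alive
  return next;
}
```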
Troubleshooting Common Issues
Plugin is installed but not active
Check that the context engine slot is set:
```shell
openclaw config get plugins.slots.contextEngine
```
It must return `Lossless-Claw`. If it returns `legacy` or is empty, set it:
```shell
openclaw config set plugins.slots.contextEngine Lossless-Claw
```
Summarization auth errors
If you see LcmProviderAuthError, the model used for summarization cannot authenticate. Check:
- Is `summaryModel` set to a model you have access to?
- Does the provider require a separate API key?
- Try unsetting `summaryModel` to fall back to the session model.
Database location
Default: `~/.openclaw/lcm.db`. Override with the `dbPath` config option or the `LCM_DB_PATH` environment variable.
To inspect the database directly:
```shell
sqlite3 ~/.openclaw/lcm.db ".tables"
sqlite3 ~/.openclaw/lcm.db "SELECT COUNT(*) FROM messages"
sqlite3 ~/.openclaw/lcm.db "SELECT id, kind, depth, token_count FROM summaries ORDER BY created_at DESC LIMIT 10"
```
Resetting LCM state
To start fresh (removes all persisted context):
```shell
rm ~/.openclaw/lcm.db
rm -rf ~/.openclaw/lcm-files/
```
The database and file store will be recreated on next session startup.
🤔 FAQ
Q: Do I need to change anything in my workflow after installing LCM?
A: No. Once installed and activated, LCM runs silently in the background. Your normal conversation workflow stays exactly the same. The agent automatically manages context assembly and compaction. You only need to use the retrieval tools (lcm_grep, lcm_describe, lcm_expand_query) when you want to recall specific historical details.
Q: Will LCM slow down my conversations?
A: Minimal impact during normal conversation. You may notice a 5-15 second pause after certain turns when compaction runs — but this happens in the background and doesn't block you. The lcm_grep and lcm_describe tools are fast (<100ms). Only lcm_expand_query is slow (30-120 seconds), and that's by design.
Q: Can I use a different model for summarization to save costs?
A: Yes. Set summaryModel to a cheaper model like anthropic/claude-haiku-4-5. The main conversation can stay on Opus or Sonnet while summarization routes through Haiku. This is one of LCM's most practical cost-control features.
Q: What happens if the summarization model fails?
A: LCM uses a three-level fallback chain: Normal → Aggressive → Deterministic (truncation). Even if the summarization model is completely down, the deterministic fallback ensures compaction always succeeds.
Q: Can sub-agents write to the LCM database?
A: By default, sub-agents are stateless and read from the parent's context. You can configure statelessSessionPatterns to control which sub-agents write vs. read-only. Sub-agents never pollute the database unless explicitly configured.
Q: How does LCM handle very large code pastes?
A: Files exceeding 25,000 tokens are externalized to disk (~/.openclaw/lcm-files/) and replaced with a ~200-token structural summary. Use lcm_describe to retrieve the full file content on demand.
Q: Is my data stored locally or sent to a server?
A: All data stays local. The SQLite database and externalized files live on your machine at ~/.openclaw/. No data is sent to any external service.
Summary & Recommendations
LCM transforms OpenClaw from a forgetful chatbot into a genuine long-term memory system. If you work on complex projects, maintain ongoing conversations with an AI assistant, or simply hate losing context when discussions get long — this plugin is essential.
Start here:
- Install: `openclaw plugins install @martian-engineering/Lossless-Claw`
- Activate: `openclaw config set plugins.slots.contextEngine Lossless-Claw`
- Verify: `openclaw plugins list`
- Done. Your next conversation starts building the DAG.
Pro tip: For cost-sensitive setups, add `"summaryModel": "anthropic/claude-haiku-4-5"` to your config. Summarization calls add up over time, and Haiku handles this task well at a fraction of the cost.
For further reading:
- Lossless-Claw repository
- OpenClaw plugin architecture
- Building OpenClaw plugins
- Context engine concept
This article is based on the official documentation for the LCM (Lossless Context Management) plugin. For the most up-to-date information, check the GitHub repository.