OpenClaw agents wake up fresh every session. No built-in memory of yesterday. No recall of what you told them last week. The memory system is what changes that. It turns a stateless chatbot into an assistant that actually knows you.

I run 13 agents on OpenClaw. They manage my podcast, my newsletter, my social media, my sponsorships. Every single one relies on the memory system to function. Without it, they would ask me the same questions every morning.

Here is exactly how it works, how to set it up, and how to make sure your agent never forgets the important stuff.

How OpenClaw Memory Actually Works

OpenClaw memory is plain Markdown files in the agent workspace. That is it. No database. No proprietary format. Just .md files that your agent reads and writes.

The core philosophy: files are the source of truth. The AI model only "remembers" what gets written to disk. If it is not in a file, it does not exist after the session ends.

This sounds simple. It is simple. And that is exactly why it works so well.

Key principle: Your agent is not storing memories in some hidden vector database. Everything lives in readable, editable Markdown files inside ~/.openclaw/workspace/. You can open them, edit them, version control them, back them up. Full transparency.

When a session starts, OpenClaw loads the workspace context files (SOUL.md, USER.md, AGENTS.md) plus today's and yesterday's daily memory logs. That is the agent's "morning briefing." Everything else gets retrieved on demand through memory tools.
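Under the hood, that briefing is just file reads. Here is a minimal sketch of the idea in Python (the function and its signature are illustrative, not OpenClaw's actual code):

```python
from datetime import date, timedelta
from pathlib import Path

CONTEXT_FILES = ["SOUL.md", "USER.md", "AGENTS.md"]  # always-loaded workspace files

def morning_briefing(workspace: Path, today: date) -> str:
    """Concatenate core context files plus yesterday's and today's daily logs."""
    parts = []
    for name in CONTEXT_FILES:
        f = workspace / name
        if f.exists():
            parts.append(f.read_text())
    for day in (today - timedelta(days=1), today):
        log = workspace / "memory" / f"{day.isoformat()}.md"
        if log.exists():  # a missing log is skipped, not an error
            parts.append(log.read_text())
    return "\n\n".join(parts)
```

Everything beyond this bundle stays on disk until a memory tool pulls it in.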

The Two Memory Layers: Daily Logs and MEMORY.md

OpenClaw uses two layers of memory files. Think of them like a journal and a reference book.

Daily Logs: memory/YYYY-MM-DD.md

These are append-only daily files. Every day gets its own file. Your agent writes what happened, decisions made, tasks completed, things to remember.

At session start, OpenClaw automatically reads today's and yesterday's daily logs. This gives the agent immediate context about recent activity without loading months of history.

Example entry:

## 2026-03-14
- Published SEO article on "best ai agent framework 2026"
- Florian approved the new thumbnail template for YouTube
- Newsletter draft pushed to Notion (issue #47)
- Next podcast guest: confirmed recording March 18

MEMORY.md: Curated Long-Term Memory

This is the big one. MEMORY.md holds curated, important information that persists indefinitely. Preferences, project details, lessons learned, hard rules.

Unlike daily logs (raw notes), MEMORY.md is distilled. Think of it as the difference between your daily journal and a personal wiki. Daily logs capture what happened. MEMORY.md captures what matters.

Security note: MEMORY.md should only load in the main, private session. Never in group chats or shared contexts. It contains personal information that should not leak to other people in a Discord server or group chat.

The default workspace layout:

~/.openclaw/workspace/
├── MEMORY.md          # Long-term curated memory
├── SOUL.md            # Agent personality
├── USER.md            # Info about you
├── AGENTS.md          # Rules and conventions
└── memory/
    ├── 2026-03-14.md  # Today's log
    ├── 2026-03-13.md  # Yesterday's log
    └── ...            # Older daily logs

Memory Tools: memory_search and memory_get

OpenClaw exposes two tools for agents to interact with memory files:

memory_search does semantic recall over all indexed memory snippets. You ask a question, and it finds the most relevant entries across all your memory files, even if the wording differs from what was originally written.

memory_get does targeted reads of specific files and line ranges. When you know exactly which file and section you need, this is faster and more precise than search.

| Tool | What It Does | When To Use |
|---|---|---|
| memory_search | Semantic search across all memory files | "What did we decide about pricing?" |
| memory_get | Read a specific file and line range | "Read today's daily log" |

Both tools handle missing files gracefully. If today's daily log does not exist yet, memory_get returns an empty result instead of crashing. Small detail, big difference for reliability.

Pro tip: Tell your agent "remember this" whenever you share something important. The agent will write it to the appropriate memory file. If you do not explicitly ask, it might not persist the information. Be direct: "Write this to MEMORY.md" works every time.

Semantic Search: The Vector Index

Keyword matching breaks when your agent wrote "podcast sponsor" but you ask about "ad revenue." Same concept, different words.

OpenClaw builds a small vector index over MEMORY.md and all memory/*.md files. This enables semantic queries that find related notes even when the wording differs.

The system watches memory files for changes (debounced, so rapid writes do not hammer the indexer). When you add or update a memory, the index updates automatically.
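Debouncing here just means collapsing a burst of file-change events into one reindex. A generic sketch of the pattern (not OpenClaw's actual watcher):

```python
import threading

class DebouncedIndexer:
    """Coalesce rapid file-change events into a single reindex call."""

    def __init__(self, reindex, delay: float = 0.5):
        self._reindex = reindex
        self._delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def on_change(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # every new write restarts the countdown
            self._timer = threading.Timer(self._delay, self._reindex)
            self._timer.start()
```

Ten rapid appends to a daily log trigger one reindex, not ten, which is why heavy note-taking sessions do not hammer the embedder.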

Embedding Providers

OpenClaw auto-selects an embedding provider based on what API keys you have configured:

  1. Local model if a model path is configured (zero API cost)
  2. OpenAI if an OpenAI key is available
  3. Gemini if a Gemini key is available
  4. Voyage AI if a Voyage key is available
  5. Mistral if a Mistral key is available

If you are running OpenClaw with Ollama local models, you can also run embeddings locally for zero cost. No data leaves your machine.

Configure memory search in your openclaw.json:

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "enabled": true
      }
    }
  }
}

Automatic Memory Flush Before Compaction

This is one of the smartest features in OpenClaw's memory system. And most people do not even know it exists.

When a session gets close to auto-compaction (the context window is filling up), OpenClaw triggers a silent turn that reminds the agent to write durable memory before the context gets compacted.

Why does this matter? Compaction summarizes your conversation to free up context space. But summaries lose details. A compacted summary might say "discussed pricing strategy" when the original conversation had specific numbers, decisions, and action items.

The memory flush captures those details into daily logs before they get compressed into a summary.

The flush is configured in your openclaw.json:

{
  "agents": {
    "defaults": {
      "compaction": {
        "reserveTokensFloor": 20000,
        "memoryFlush": {
          "enabled": true,
          "softThresholdTokens": 4000
        }
      }
    }
  }
}

How it works: When the session token count crosses the threshold, OpenClaw inserts a system prompt asking the agent to persist important context. The agent writes to memory files, then the conversation compacts. Details survive because they are on disk, not just in the context window.
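One plausible reading of those two settings: compaction kicks in once the session is within reserveTokensFloor of the context limit, and the flush fires softThresholdTokens before that point. A sketch of that check (my interpretation, not confirmed internals):

```python
def should_flush_memory(session_tokens: int, context_limit: int,
                        reserve_floor: int = 20000,
                        soft_threshold: int = 4000) -> bool:
    """True once the session is within soft_threshold tokens of the
    point where compaction would trigger (context_limit - reserve_floor)."""
    compaction_point = context_limit - reserve_floor
    return session_tokens >= compaction_point - soft_threshold
```

With a 200k context window and the defaults above, the flush prompt would fire around 176k tokens, leaving the agent room to write its notes before compaction at 180k.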

Memory Plugins: QMD, Mem0, Supermemory

The built-in memory system works well for most people. But if you are running a complex setup with many agents and months of accumulated memory, you might want more powerful search.

OpenClaw supports pluggable memory backends:

QMD (Local-First Hybrid Search)

QMD is a local search sidecar that combines BM25 keyword matching with vector search and reranking. Markdown stays the source of truth. OpenClaw shells out to QMD for retrieval.

Set it up with memory.backend = "qmd" in your config. Best for: power users who want better search without sending data to external APIs.
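Based on that memory.backend setting, the config entry would look something like this (the exact key layout may differ by version):

{
  "memory": {
    "backend": "qmd"
  }
}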

Mem0

Mem0 adds structured memory management on top of OpenClaw's file-based system. It can auto-extract and categorize memories from conversations. Available as a self-hosted plugin (the open-source Mem0 project on GitHub).

Supermemory

Supermemory is a cloud-based memory layer that provides long-term memory and recall. The openclaw-supermemory plugin on GitHub automatically remembers conversations, recalls relevant context, and builds a persistent user profile.

| Plugin | Type | Best For |
|---|---|---|
| Built-in (memory-core) | Local, file-based | Most users, simple setups |
| QMD | Local hybrid search | Power users, privacy-focused |
| Mem0 | Self-hosted or cloud | Structured memory extraction |
| Supermemory | Cloud | Maximum recall accuracy |

Building a Memory Architecture That Scales

After running 13 agents for months, here is what I have learned about structuring memory so it actually works at scale.

Use Multiple Memory Files, Not One Giant MEMORY.md

MEMORY.md is for curated, permanent information. But do not dump everything there. Use the memory/ folder for specialized files:

memory/
├── 2026-03-14.md       # Daily log
├── regressions.md      # Failure guardrails
├── friction-log.md     # Instruction contradictions
├── predictions.md      # Decision calibration
├── context-holds.md    # Temporary priorities with expiry
└── heartbeat-state.json # Background check tracking

Each file has a clear purpose. When your agent needs regression guardrails, it reads regressions.md. When it needs temporary priorities, it checks context-holds.md. Focused files beat a 2,000-line MEMORY.md every time.

Write Immediately, Not Later

The biggest memory mistake: planning to write things down "later." There is no later. When someone tells you something important, the agent should write it to the daily log in the same turn. Not at compaction. Not at the end of the session. Now.

Compaction summaries lose details. Daily memory files do not. Write first, act second.

Regular Memory Maintenance

Every few days, use a heartbeat or cron job to:

  1. Read through recent daily files
  2. Identify significant events worth keeping long-term
  3. Update MEMORY.md with distilled learnings
  4. Remove outdated information

Think of it like reviewing your journal and updating your mental model. Daily files are raw notes. MEMORY.md is curated wisdom.

Common Memory Mistakes and How to Fix Them

Mistake 1: Never telling the agent to remember things. The agent does not automatically persist everything you say. Be explicit: "Remember this" or "Write this to memory." If you do not say it, it might not survive the session.

Mistake 2: One massive MEMORY.md file. After months of use, a single MEMORY.md becomes a wall of text. Split into focused files. Use memory_search to find things across files.

Mistake 3: Not configuring memory search. Without an embedding provider, memory_search falls back to basic matching. Configure at least one provider (OpenAI, Gemini, or local) for semantic recall.

Mistake 4: Loading MEMORY.md in group chats. MEMORY.md contains personal context. Loading it in a Discord server or group chat leaks that information to everyone in the conversation. Keep it to private sessions only.

Mistake 5: Ignoring the memory flush setting. The pre-compaction memory flush is disabled by default in some configurations. Enable it. It saves details that would otherwise be lost during compaction.

Quick fix: Add this to your AGENTS.md or system instructions: "When I tell you something important, write it to memory/YYYY-MM-DD.md immediately. Do not wait." This single instruction dramatically improves memory reliability.

The OpenClaw memory system is not magic. It is just files. But files that are well-organized, consistently updated, and searchable become something powerful: an AI assistant that actually knows you.

If you want to see the exact memory architecture I use across 13 agents, including the SOPs, daily logs, and cron jobs that keep everything in sync, I share all of it inside OpenClaw Lab.

Ready to set up your own agent? Start at installopenclawnow.com.

OpenClaw Lab is the #1 community for founders building AI agent systems. I share the exact playbooks, skill files, and workflows inside. Weekly lives, expert AMAs, and 265+ members building real systems.

Join OpenClaw Lab →