OpenClaw agents wake up fresh every session. No built-in memory of yesterday. No recall of what you told them last week. The memory system is what changes that. It turns a stateless chatbot into an assistant that actually knows you.

I run 13 agents on OpenClaw. They manage my podcast, my newsletter, my social media, my sponsorships. Every single one relies on the memory system to function. Without it, they would ask me the same questions every morning.

Here is exactly how it works, how to set it up, and how to make sure your agent never forgets the important stuff.

How OpenClaw Memory Actually Works

OpenClaw memory is plain Markdown files in the agent workspace. That is it. No database. No proprietary format. Just .md files that your agent reads and writes.

The core philosophy: files are the source of truth. The AI model only "remembers" what gets written to disk. If it is not in a file, it does not exist after the session ends.

This sounds simple. It is simple. And that is exactly why it works so well.

Key principle: Your agent is not storing memories in some hidden vector database. Everything lives in readable, editable Markdown files inside ~/.openclaw/workspace/. You can open them, edit them, version control them, back them up. Full transparency.

When a session starts, OpenClaw loads the workspace context files (SOUL.md, USER.md, AGENTS.md) plus today's and yesterday's daily memory logs. That is the agent's "morning briefing." Everything else gets retrieved on demand through memory tools.

The Two Memory Layers: Daily Logs and MEMORY.md

OpenClaw uses two layers of memory files. Think of them like a journal and a reference book.

Daily Logs: memory/YYYY-MM-DD.md

These are append-only daily files. Every day gets its own file. Your agent writes what happened, decisions made, tasks completed, things to remember.

At session start, OpenClaw automatically reads today's and yesterday's daily logs. This gives the agent immediate context about recent activity without loading months of history.

Example entry:

## 2026-03-14
- Published SEO article on "best ai agent framework 2026"
- Florian approved the new thumbnail template for YouTube
- Newsletter draft pushed to Notion (issue #47)
- Next podcast guest: confirmed recording March 18

MEMORY.md: Curated Long-Term Memory

This is the big one. MEMORY.md holds curated, important information that persists indefinitely. Preferences, project details, lessons learned, hard rules.

Unlike daily logs (raw notes), MEMORY.md is distilled. Think of it as the difference between your daily journal and a personal wiki. Daily logs capture what happened. MEMORY.md captures what matters.

MEMORY.md Deep Dive: Structure That Scales

Most people throw everything into MEMORY.md and end up with a 3,000-line file that the agent barely reads. Here is how to structure it so it stays useful at scale.

Organize MEMORY.md into clear sections:

# MEMORY.md

## Hard Rules (never change without Florian's approval)
- Never send emails without showing draft first
- Always use trash instead of rm
- Ship perfect, not fast

## Current Projects
- Podcast: 2 episodes/week, Monday and Thursday
- OpenClaw Lab: Skool community, $29/month
- Distribb: SEO SaaS, targeting $200K MRR

## Preferences
- Communication: Telegram first, WhatsApp for local Bali
- Model: Claude Opus for everything
- Writing: no em dashes, short paragraphs, real numbers

## Lessons Learned
- [2026-03-10] Compaction summaries lose details. Write to daily files immediately.
- [2026-03-15] Subject lines with numbers get 20% higher open rates.
- [2026-03-18] Don't run cron jobs more frequently than every 30 min unless critical.

Each section serves a different retrieval pattern. When the agent needs to make a decision, it checks Hard Rules. When it needs project context, it checks Current Projects. When it drafts content, it checks Preferences. Clean sections beat a wall of unsorted notes every time.

Memory Tiers: Trust-Scored Context

Not all memories are equal. A rule that Florian explicitly stated is more reliable than something the agent inferred from context. Tracking this distinction prevents the agent from treating guesses as facts.

Here is the tier system I use:

| Tier | Trust Score | Source | Expires? |
|------|-------------|--------|----------|
| Constitutional | 1.0 | Direct statement from user | Never. Security rules, hard constraints. |
| Strategic | 0.9 | Direct statement, context-dependent | Refreshed quarterly. Projects, goals, direction. |
| Operational | 0.8 | Observed or inferred | Auto-archive after 30 days unused. |
| Speculative | 0.5-0.7 | External sources, unverified | Verify within 7 days or discard. |

In practice, you tag entries with their source: [trust:1.0|src:direct] for things the user explicitly said, [trust:0.7|src:observed] for things the agent inferred. When two memories contradict each other, the higher trust score wins. Simple rule, huge impact on reliability.
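The tag-and-resolve rule above can be sketched in a few lines. The `[trust:…|src:…]` tag format comes from the article; the parsing and conflict-resolution logic here is my illustrative sketch, not OpenClaw's actual implementation.

```python
import re

# Matches tags like "[trust:1.0|src:direct]" appended to memory entries.
TAG = re.compile(r"\[trust:(?P<score>[0-9.]+)\|src:(?P<src>\w+)\]")

def parse_entry(line: str):
    """Return (trust_score, source, text) for a tagged memory line."""
    m = TAG.search(line)
    if not m:
        return (0.5, "unknown", line.strip())  # untagged entries default to speculative
    text = TAG.sub("", line).strip()
    return (float(m.group("score")), m.group("src"), text)

def resolve(conflicting: list[str]) -> str:
    """When two memories contradict, the higher trust score wins."""
    scored = [parse_entry(line) for line in conflicting]
    return max(scored, key=lambda e: e[0])[2]

entries = [
    "- Prefers long-form emails [trust:0.7|src:observed]",
    "- Keep all emails under 100 words [trust:1.0|src:direct]",
]
print(resolve(entries))  # the direct statement wins
```

The payoff is that conflict resolution becomes mechanical: no judgment call needed at retrieval time, because the judgment was encoded when the memory was written.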

Security note: MEMORY.md should only load in the main, private session. Never in group chats or shared contexts. It contains personal information that should not leak to other people in a Discord server or group chat.

The default workspace layout:

~/.openclaw/workspace/
├── MEMORY.md          # Long-term curated memory
├── SOUL.md            # Agent personality
├── USER.md            # Info about you
├── AGENTS.md          # Rules and conventions
└── memory/
    ├── 2026-03-14.md  # Today's log
    ├── 2026-03-13.md  # Yesterday's log
    └── ...            # Older daily logs
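Given that layout, the session-start file set is easy to picture. A sketch of the "morning briefing" described above; the helper name and signature are illustrative, not the real OpenClaw API (and MEMORY.md would be added only in the main, private session):

```python
from datetime import date, timedelta
from pathlib import Path

def startup_files(workspace: Path, today: date) -> list[Path]:
    """Context files plus today's and yesterday's daily logs."""
    return [
        workspace / "SOUL.md",
        workspace / "USER.md",
        workspace / "AGENTS.md",
        workspace / "memory" / f"{today:%Y-%m-%d}.md",
        workspace / "memory" / f"{today - timedelta(days=1):%Y-%m-%d}.md",
    ]

files = startup_files(Path.home() / ".openclaw" / "workspace", date(2026, 3, 14))
print([f.name for f in files[-2:]])  # ['2026-03-14.md', '2026-03-13.md']
```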

Memory Tools: memory_search and memory_get

OpenClaw exposes two tools for agents to interact with memory files:

memory_search does semantic recall over all indexed memory snippets. You ask a question, and it finds the most relevant entries across all your memory files, even if the wording differs from what was originally written.

memory_get does targeted reads of specific files and line ranges. When you know exactly which file and section you need, this is faster and more precise than search.

| Tool | What It Does | When To Use |
|------|--------------|-------------|
| memory_search | Semantic search across all memory files | "What did we decide about pricing?" |
| memory_get | Read specific file and line range | "Read today's daily log" |

Both tools handle missing files gracefully. If today's daily log does not exist yet, memory_get returns an empty result instead of crashing. Small detail, big difference for reliability.

Pro tip: Tell your agent "remember this" whenever you share something important. The agent will write it to the appropriate memory file. If you do not explicitly ask, it might not persist the information. Be direct: "Write this to MEMORY.md" works every time.

Semantic Search: The Memory Index

Keyword matching breaks when your agent wrote "podcast sponsor" but you ask about "ad revenue." Same concept, different words.

OpenClaw builds a small vector index over MEMORY.md and all memory/*.md files. This enables semantic queries that find related notes even when the wording differs.

The system watches memory files for changes (debounced, so rapid writes do not hammer the indexer). When you add or update a memory, the index updates automatically.
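The debounce pattern mentioned above is simple to illustrate: coalesce a burst of change events into one reindex call. This is a generic sketch of the technique, not OpenClaw's actual watcher code.

```python
import threading
import time

class Debouncer:
    """Coalesce rapid file-change events into a single action call."""
    def __init__(self, delay: float, action):
        self.delay = delay
        self.action = action
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        with self._lock:
            if self._timer:
                self._timer.cancel()  # a newer event supersedes the pending one
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.start()

calls = []
d = Debouncer(0.05, lambda: calls.append("reindex"))
for _ in range(10):   # ten rapid writes...
    d.trigger()
time.sleep(0.2)
print(calls)          # ['reindex'] — ...one index update
```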

Embedding Providers

OpenClaw auto-selects an embedding provider based on what API keys you have configured:

  1. Local model if a model path is configured (zero API cost)
  2. OpenAI if an OpenAI key is available
  3. Gemini if a Gemini key is available
  4. Voyage AI if a Voyage key is available
  5. Mistral if a Mistral key is available

If you are running OpenClaw with Ollama local models, you can also run embeddings locally for zero cost. No data leaves your machine.
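The selection order above amounts to a simple fallback chain. In this sketch the config key `localModelPath` and the environment-variable names are my assumptions for illustration, not confirmed OpenClaw configuration:

```python
def pick_embedding_provider(config: dict, env: dict) -> str:
    """Walk the provider preference order described above."""
    if config.get("localModelPath"):  # assumed config key, zero API cost
        return "local"
    for provider, key in [
        ("openai", "OPENAI_API_KEY"),
        ("gemini", "GEMINI_API_KEY"),
        ("voyage", "VOYAGE_API_KEY"),
        ("mistral", "MISTRAL_API_KEY"),
    ]:
        if env.get(key):
            return provider
    return "none"  # memory_search falls back to basic matching

print(pick_embedding_provider({}, {"GEMINI_API_KEY": "..."}))  # gemini
```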

Configure memory search in your openclaw.json:

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "enabled": true
      }
    }
  }
}

Automatic Memory Flush Before Compaction

This is one of the smartest features in OpenClaw's memory system. And most people do not even know it exists.

When a session gets close to auto-compaction (the context window is filling up), OpenClaw triggers a silent turn that reminds the agent to write durable memory before the context gets compacted.

Why does this matter? Compaction summarizes your conversation to free up context space. But summaries lose details. A compacted summary might say "discussed pricing strategy" when the original conversation had specific numbers, decisions, and action items.

The memory flush captures those details into daily logs before they get compressed into a summary.

The flush is configured in your openclaw.json:

{
  "agents": {
    "defaults": {
      "compaction": {
        "reserveTokensFloor": 20000,
        "memoryFlush": {
          "enabled": true,
          "softThresholdTokens": 4000
        }
      }
    }
  }
}

How it works: When the session token count crosses the threshold, OpenClaw inserts a system prompt asking the agent to persist important context. The agent writes to memory files, then the conversation compacts. Details survive because they are on disk, not just in the context window.
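One plausible reading of the two settings (my interpretation, not documented semantics): compaction kicks in when free context drops below reserveTokensFloor, and the flush fires softThresholdTokens before that point.

```python
def should_flush(used: int, window: int,
                 reserve_floor: int = 20_000, soft: int = 4_000) -> bool:
    """True when the session is within `soft` tokens of the compaction point."""
    compaction_point = window - reserve_floor
    return used >= compaction_point - soft

print(should_flush(used=177_000, window=200_000))  # True: flush, then compact
print(should_flush(used=100_000, window=200_000))  # False: plenty of room left
```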

Memory Plugins: QMD, Mem0, Supermemory

The built-in memory system works well for most people. But if you are running a complex setup with many agents and months of accumulated memory, you might want more powerful search.

OpenClaw supports pluggable memory backends:

QMD (Local-First Hybrid Search)

QMD is a local search sidecar that combines BM25 keyword matching with vector search and reranking. Markdown stays the source of truth. OpenClaw shells out to QMD for retrieval.

Set it up with memory.backend = "qmd" in your config. Best for: power users who want better search without sending data to external APIs.

Mem0

Mem0 adds structured memory management on top of OpenClaw's file-based system. It can auto-extract and categorize memories from conversations. Available as a self-hosted plugin (the open-source Mem0 project on GitHub).

Supermemory

Supermemory is a cloud-based memory layer that provides long-term memory and recall. The openclaw-supermemory plugin on GitHub automatically remembers conversations, recalls relevant context, and builds a persistent user profile.

| Plugin | Type | Best For |
|--------|------|----------|
| Built-in (memory-core) | Local, file-based | Most users, simple setups |
| QMD | Local hybrid search | Power users, privacy-focused |
| Mem0 | Self-hosted or cloud | Structured memory extraction |
| Supermemory | Cloud | Maximum recall accuracy |

Building a Memory Architecture That Scales

After running 13 agents for months, here is what I have learned about structuring memory so it actually works at scale.

Use Multiple Memory Files, Not One Giant MEMORY.md

MEMORY.md is for curated, permanent information. But do not dump everything there. Use the memory/ folder for specialized files:

memory/
├── 2026-03-14.md       # Daily log
├── regressions.md      # Failure guardrails
├── friction-log.md     # Instruction contradictions
├── predictions.md      # Decision calibration
├── context-holds.md    # Temporary priorities with expiry
└── heartbeat-state.json # Background check tracking

Each file has a clear purpose. When your agent needs regression guardrails, it reads regressions.md. When it needs temporary priorities, it checks context-holds.md. Focused files beat a 2,000-line MEMORY.md every time.

Write Immediately, Not Later

The biggest memory mistake: planning to write things down "later." There is no later. When someone tells you something important, the agent should write it to the daily log in the same turn. Not at compaction. Not at the end of the session. Now.

Compaction summaries lose details. Daily memory files do not. Write first, act second.
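The write-first habit boils down to one append call per important fact. A hypothetical helper illustrating the pattern; the function name and workspace path handling are mine, not OpenClaw's:

```python
from datetime import date
from pathlib import Path
import tempfile

def log_now(workspace: Path, note: str) -> Path:
    """Append a note to today's daily log in the same turn it was learned."""
    log = workspace / "memory" / f"{date.today():%Y-%m-%d}.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return log

ws = Path(tempfile.mkdtemp())  # throwaway workspace for the demo
log_now(ws, "Sponsor confirmed: recording moved to March 18")
```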

Regular Memory Maintenance

Every few days, use a heartbeat or cron job to:

  1. Read through recent daily files
  2. Identify significant events worth keeping long-term
  3. Update MEMORY.md with distilled learnings
  4. Remove outdated information

Think of it like reviewing your journal and updating your mental model. Daily files are raw notes. MEMORY.md is curated wisdom.

The Daily Files System in Detail

Daily files (memory/YYYY-MM-DD.md) are the backbone of short-term recall. Here are the patterns that make them work best:

OpenClaw reads today's and yesterday's daily files at session start. It does not automatically read older files. This is intentional. Loading 30 days of daily logs would consume too much context window. Instead, the agent uses memory_search to query older files when needed.

For archiving, I run a monthly cron job that moves daily files older than 90 days into an archive/ folder. They are still searchable via memory_search but do not clutter the main memory directory.

Meta-Learning Architecture

Beyond basic memory, I built a meta-learning system using specialized memory files. Each file serves as a feedback loop that makes the agent smarter over time.

regressions.md: Every significant failure becomes a one-line rule. "Never post to X without approval." "Always double-check timezone offsets in cron jobs." The agent reads this file at boot, so it carries failure lessons from one session to the next. Think of it as a growing list of guardrails.

friction-log.md: When the agent receives contradictory instructions, it logs the conflict here instead of silently picking one. Example: "SOUL.md says 'be concise' but the newsletter SOP says 'write 500+ words.' Which takes priority for newsletter drafts?" The human reviews and resolves the conflict. The resolution becomes a permanent rule.

predictions.md: Before major decisions, the agent writes a prediction: "I predict the subject line with a number will get 5%+ higher open rate." After the result is in, the agent fills in the actual outcome. Over time, this calibrates the agent's confidence. If it consistently overestimates open rates, that bias becomes visible and correctable.

context-holds.md: Temporary priorities with expiry dates. "Until March 31: prioritize sponsor outreach over content creation." After the date passes, the hold expires automatically. This prevents stale priorities from lingering in MEMORY.md forever.
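The expiry mechanic is the interesting part of context-holds.md. Assuming a hypothetical line format of "Until YYYY-MM-DD: priority text" (my invention for illustration), the filter looks like this:

```python
from datetime import date
import re

# Hypothetical entry format: "Until 2026-03-31: prioritize sponsor outreach"
HOLD = re.compile(r"Until (\d{4}-\d{2}-\d{2}): (.+)")

def active_holds(lines: list[str], today: date) -> list[str]:
    """Keep only holds whose expiry date has not passed."""
    out = []
    for line in lines:
        m = HOLD.search(line)
        if m and date.fromisoformat(m.group(1)) >= today:
            out.append(m.group(2))
    return out

holds = [
    "Until 2026-03-31: prioritize sponsor outreach over content creation",
    "Until 2026-02-01: hold newsletter redesign",
]
print(active_holds(holds, date(2026, 3, 14)))
```

The expired hold drops out automatically; nothing stale ever needs to be manually deleted.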

Together, these files create a system that does not just remember what happened. It learns from mistakes, surfaces contradictions, calibrates confidence, and manages temporary context. That is what separates a basic memory setup from an architecture that improves over months of use.

Common Memory Mistakes and How to Fix Them

Mistake 1: Never telling the agent to remember things. The agent does not automatically persist everything you say. Be explicit: "Remember this" or "Write this to memory." If you do not say it, it might not survive the session.

Mistake 2: One massive MEMORY.md file. After months of use, a single MEMORY.md becomes a wall of text. Split into focused files. Use memory_search to find things across files.

Mistake 3: Not configuring memory search. Without an embedding provider, memory_search falls back to basic matching. Configure at least one provider (OpenAI, Gemini, or local) for semantic recall.

Mistake 4: Loading MEMORY.md in group chats. MEMORY.md contains personal context. Loading it in a Discord server or group chat leaks that information to everyone in the conversation. Keep it to private sessions only.

Mistake 5: Ignoring the memory flush setting. The pre-compaction memory flush is disabled by default in some configurations. Enable it. It saves details that would otherwise be lost during compaction.

Quick fix: Add this to your AGENTS.md or system instructions: "When I tell you something important, write it to memory/YYYY-MM-DD.md immediately. Do not wait." This single instruction dramatically improves memory reliability.

The OpenClaw memory system is not magic. It is just files. But files that are well-organized, consistently updated, and searchable become something powerful: an AI assistant that actually knows you.

If you want to see the exact memory architecture I use across 13 agents, including the SOPs, daily logs, and cron jobs that keep everything in sync, I share all of it inside OpenClaw Lab.

Ready to set up your own agent? Start at installopenclawnow.com.

Frequently Asked Questions

How does the OpenClaw memory system work?

OpenClaw uses markdown files for persistent memory. MEMORY.md stores long-term context, daily files (memory/YYYY-MM-DD.md) store session logs, and workspace files hold project-specific knowledge. The agent reads these files at the start of each session to maintain continuity.

Can OpenClaw agents remember things between sessions?

Yes, OpenClaw agents maintain memory across sessions through markdown files. The agent writes important information to memory files during conversations and reads them back when starting a new session. This creates persistent context that survives restarts.

What is MEMORY.md in OpenClaw?

MEMORY.md is the long-term memory file for your OpenClaw agent. It contains curated information about preferences, past decisions, ongoing projects, and important context. The agent reads it every session and updates it with new learnings.

How do I improve my OpenClaw agent's memory?

Tell your agent to write important things down immediately. Set up daily memory files for session logs and keep MEMORY.md updated with long-term context. Use a heartbeat schedule to periodically review and consolidate memory across files.

Does OpenClaw have vector memory or RAG?

OpenClaw uses file-based memory by default, which is simple and transparent. For advanced use cases, you can add vector memory or RAG through skills and custom scripts. The built-in Lossless Context Management (LCM) system handles context compaction automatically.

OpenClaw Lab is the #1 community for founders building AI agent systems. I share the exact playbooks, skill files, and workflows inside. Weekly lives, expert AMAs, and 260+ founders building real systems.

Join OpenClaw Lab →