# Mnemoverse Documentation — Full Content This file contains the complete Mnemoverse API documentation in plain text. AI agents can fetch this file to get all information needed to integrate with Mnemoverse. Generated: 2026-04-15T18:10:05.612Z Source: https://mnemoverse.com/docs/ --- --- URL: https://mnemoverse.com/docs/api/getting-started # Getting Started Get your first memory stored and queried in 2 minutes. ## 1. Get Your API Key Sign up at [console.mnemoverse.com](https://console.mnemoverse.com) to get a free API key. Free tier: 1,000 queries/day, no credit card required. Your API key starts with `mk_` and looks like: `mk_live_a1b2c3d4e5f6...` ## 2. Store a Memory ```bash curl -X POST https://core.mnemoverse.com/api/v1/memory/write \ -H "X-Api-Key: mk_live_YOUR_KEY" \ -H "Content-Type: application/json" \ -d '{ "content": "Retry with exponential backoff fixed the timeout issue", "concepts": ["retry", "backoff", "timeout"], "domain": "engineering" }' ``` Response: ```json { "stored": true, "atom_id": "550e8400-e29b-41d4-a716-446655440000", "importance": 0.85, "reason": "novel insight — high knowledge delta" } ``` The importance gate automatically filters noise. If your memory is too similar to existing ones, it won't be stored (and `stored` will be `false`). ## 3. 
Query Memories ```bash curl -X POST https://core.mnemoverse.com/api/v1/memory/read \ -H "X-Api-Key: mk_live_YOUR_KEY" \ -H "Content-Type: application/json" \ -d '{ "query": "how to handle timeouts?", "top_k": 5 }' ``` Response: ```json { "items": [ { "atom_id": "550e8400-e29b-41d4-a716-446655440000", "content": "Retry with exponential backoff fixed the timeout issue", "relevance": 0.92, "similarity": 0.87, "valence": 0.0, "importance": 0.85, "source": "semantic", "concepts": ["retry", "backoff", "timeout"], "domain": "engineering", "metadata": {} } ], "episodic_hit": false, "query_concepts": ["timeout", "handling"], "expanded_concepts": ["timeout", "handling", "retry", "backoff"], "search_time_ms": 12.5 } ``` Notice `expanded_concepts` — Hebbian associations automatically expanded "timeout" to include "retry" and "backoff" based on learned connections. ## 4. Report Outcomes When a memory was useful (or not), report it: ```bash curl -X POST https://core.mnemoverse.com/api/v1/memory/feedback \ -H "X-Api-Key: mk_live_YOUR_KEY" \ -H "Content-Type: application/json" \ -d '{ "atom_ids": ["550e8400-e29b-41d4-a716-446655440000"], "outcome": 1.0, "query_concepts": ["timeout", "handling"] }' ``` This does three things: 1. Updates the memory's **valence** (outcome polarity) — future queries rank it higher 2. Strengthens **Hebbian edges** between concepts — "timeout" and "retry" become more associated 3. Creates **co-activation links** between query concepts and result concepts Over time, the system learns which memories are useful for which queries. ## 5. 
Check Stats ```bash curl https://core.mnemoverse.com/api/v1/memory/stats \ -H "X-Api-Key: mk_live_YOUR_KEY" ``` ```json { "total_atoms": 1, "episodes": 1, "prototypes": 0, "singletons": 0, "hebbian_edges": 3, "episodic_fingerprints": 0, "domains": ["engineering"], "avg_valence": 0.8, "avg_importance": 0.85 } ``` ## Next Steps - [Claude Code & Desktop](/api/claude) — give Claude persistent memory - [Cursor, VS Code & Windsurf](/api/editors) — editor integrations - [ChatGPT](/api/chatgpt) — give any Custom GPT persistent memory - [Python SDK](/api/python-sdk) — `pip install mnemoverse` for scripts and backends - [API Reference](/api/reference) — full endpoint documentation - [Security](/api/security) — how we isolate and protect your data --- URL: https://mnemoverse.com/docs/api/reference # API Reference Base URL: `https://core.mnemoverse.com/api/v1` All endpoints require authentication via `X-Api-Key` header or `Authorization: Bearer` header. ## Authentication Include your API key in every request: ```bash # Option 1: X-Api-Key header (recommended) curl -H "X-Api-Key: mk_live_YOUR_KEY" ... # Option 2: Bearer token curl -H "Authorization: Bearer mk_live_YOUR_KEY" ... ``` ## Endpoints ### POST /memory/write Store a single memory atom. 
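For example, the request body for this endpoint can be assembled as follows. This is a sketch in plain Python — the field names mirror the request table that follows, and the transport comment is illustrative (any HTTP client works):

```python
import json

# Build a /memory/write request body. Field names mirror the request table
# for this endpoint; "external_ref" is a client-provided idempotency key.
payload = {
    "content": "Retry with exponential backoff fixed the timeout issue",
    "concepts": ["retry", "backoff", "timeout"],  # optional
    "domain": "engineering",                      # optional, defaults to "general"
    "metadata": {"source": "incident-report"},    # optional
    "external_ref": "incident-42",                # optional, enables idempotent writes
}
body = json.dumps(payload).encode("utf-8")
# POST `body` to https://core.mnemoverse.com/api/v1/memory/write with
# headers X-Api-Key: mk_live_YOUR_KEY and Content-Type: application/json.
```

Re-sending the same `external_ref` should not create a duplicate atom, which makes this shape safe to retry.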
**Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `content` | string | Yes | The insight, pattern, or lesson to remember (1-10,000 chars) | | `concepts` | string[] | No | Key concepts for Hebbian associations | | `domain` | string | No | Namespace: `general`, `user:X`, `project:Z` (default: `general`) | | `metadata` | object | No | Arbitrary key-value metadata | | `external_ref` | string | No | Client-provided unique reference for idempotent writes | **Response:** | Field | Type | Description | | --- | --- | --- | | `stored` | boolean | True if the atom passed the importance gate | | `atom_id` | UUID | UUID of the stored atom, or null if filtered | | `importance` | float | Computed importance score [0, 1] | | `reason` | string | Why stored or filtered | --- ### POST /memory/write-batch Store up to 500 atoms in one request. **Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `items` | WriteRequest[] | Yes | Array of write requests (1-500) | **Response:** | Field | Type | Description | | --- | --- | --- | | `total_count` | int | Total atoms processed | | `stored_count` | int | Atoms that passed importance gate | | `results` | object[] | Per-atom results with `index`, `stored`, `atom_id`, `importance`, `error` | --- ### POST /memory/read Query memory with semantic search + Hebbian expansion. 
**Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `query` | string | Yes | Natural language query (1-5,000 chars) | | `top_k` | int | No | Max results (1-100, default: 10) | | `domain` | string | No | Filter by domain (null = all) | | `min_relevance` | float | No | Minimum relevance threshold (0-1, default: 0.3) | | `include_associations` | bool | No | Expand via Hebbian associations (default: true) | | `concepts` | string[] | No | Concept hints to bias search | **Response:** | Field | Type | Description | | --- | --- | --- | | `items` | MemoryItem[] | Matching memories, ordered by relevance | | `episodic_hit` | boolean | True if exact fingerprint match found | | `query_concepts` | string[] | Concepts extracted from query | | `expanded_concepts` | string[] | Concepts after Hebbian expansion | | `search_time_ms` | float | Search duration in milliseconds | **MemoryItem fields:** | Field | Type | Description | | --- | --- | --- | | `atom_id` | UUID | Unique identifier | | `content` | string | Stored text content | | `relevance` | float | Final score (similarity * valence modulation) | | `similarity` | float | Raw cosine similarity | | `valence` | float | Outcome polarity [-1, +1] | | `importance` | float | Importance score [0, 1] | | `source` | string | Hit source: `episodic`, `semantic`, or `hebbian` | | `concepts` | string[] | Associated concepts | | `domain` | string | Domain namespace | | `metadata` | object | Arbitrary metadata | --- ### POST /memory/read-batch Batch query up to 50 queries in one request. **Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `queries` | ReadRequest[] | Yes | Array of read requests (1-50) | **Response:** | Field | Type | Description | | --- | --- | --- | | `results` | ReadResponse[] | Per-query results | --- ### POST /memory/query Advanced query with multi-domain and metadata filtering. 
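A request body for this endpoint might look like the sketch below. Note that the exact shape of a `Filter` object is not spelled out in this reference — the `{"field", "op", "value"}` form used here is an assumption for illustration only:

```python
import json

# Sketch of a /memory/query body combining multi-domain and metadata filters.
# The Filter object shape ({"field", "op", "value"}) is assumed, not documented.
payload = {
    "query": "deployment lessons from past incidents",
    "domains": ["engineering", "project:acme"],
    "metadata_filter": [
        {"field": "source", "op": "eq", "value": "incident-report"},
    ],
    "top_k": 10,
    "min_relevance": 0.3,
}
body = json.dumps(payload)
# POST to https://core.mnemoverse.com/api/v1/memory/query with the X-Api-Key header.
```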
**Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `query` | string | Yes | Natural language query | | `domains` | string[] | No | Filter by multiple domains | | `metadata_filter` | Filter[] | No | JSONB metadata conditions (`eq`, `contains`, `in`) | | `top_k` | int | No | Max results (default: 10) | | `min_relevance` | float | No | Min relevance (default: 0.3) | | `include_associations` | bool | No | Hebbian expansion (default: true) | | `concepts` | string[] | No | Concept hints | --- ### POST /memory/feedback Report outcome (success/failure) for memories. Updates valence and Hebbian associations. **Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `atom_ids` | UUID[] | Yes | Atoms to update | | `outcome` | float | Yes | Outcome signal: -1.0 (failure) to +1.0 (success) | | `concepts` | string[] | No | Concepts to reinforce | | `query_concepts` | string[] | No | Original query concepts (enables co-activation learning) | | `domain` | string | No | Domain for the feedback | **Response:** | Field | Type | Description | | --- | --- | --- | | `updated_count` | int | Number of atoms updated | | `avg_valence` | float | Average valence after update | | `coactivation_edges` | int | Hebbian edges created/updated | --- ### POST /memory/consolidate Trigger sleep consolidation. Clusters similar memories into prototypes, protects distinctive ones as singletons. 
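As a worked example of what this endpoint returns (fields as documented in the response table below; the numbers are illustrative), `compression_ratio` is simply `atoms_before / atoms_after`:

```python
# Illustrative consolidation response: 120 atoms compressed to 48.
resp = {
    "domain": "engineering",
    "atoms_before": 120,
    "atoms_after": 48,
    "prototypes_created": 15,
    "singletons_protected": 6,
    "compression_ratio": 2.5,  # = atoms_before / atoms_after
    "duration_ms": 840.0,
}
ratio = resp["atoms_before"] / resp["atoms_after"]
assert abs(ratio - resp["compression_ratio"]) < 1e-9
```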
**Request:** | Field | Type | Required | Description | | --- | --- | --- | --- | | `domain` | string | No | Domain to consolidate (null = general) | **Response:** | Field | Type | Description | | --- | --- | --- | | `domain` | string | Domain consolidated | | `atoms_before` | int | Atom count before | | `atoms_after` | int | Atom count after | | `prototypes_created` | int | New prototype atoms | | `singletons_protected` | int | Von Restorff protected atoms | | `compression_ratio` | float | atoms_before / atoms_after | | `duration_ms` | float | Duration in milliseconds | --- ### GET /memory/stats Get memory statistics for your tenant. **Response:** | Field | Type | Description | | --- | --- | --- | | `total_atoms` | int | Total atoms stored | | `episodes` | int | Episode-type atoms | | `prototypes` | int | Prototype atoms (from consolidation) | | `singletons` | int | Protected singleton atoms | | `hebbian_edges` | int | Concept-concept association edges | | `domains` | string[] | Active domain names | | `avg_valence` | float | Average outcome valence | | `avg_importance` | float | Average importance score | --- ### GET /health Liveness probe. No authentication required. **Response:** `{"status": "ok", "database": true, "version": "1.0.0"}` ### GET /health/ready Deep readiness check (database + engine + embedding model). No authentication required. **Response:** `{"ready": true, "checks": {"database": true, "engine": true, "embedding": true}, "version": "1.0.0"}` --- ## Error Responses All errors follow a consistent format: ```json { "code": "UNAUTHORIZED", "message": "Invalid or missing API key", "requestId": "req_abc123", "retryable": false, "details": null } ``` | HTTP Status | Code | Description | | --- | --- | --- | | 401 | UNAUTHORIZED | Invalid or missing API key | | 403 | FORBIDDEN | API key lacks permission | | 429 | RATE_LIMITED | Too many requests. 
Check `Retry-After` header | | 422 | VALIDATION_ERROR | Invalid request body | | 500 | INTERNAL_ERROR | Server error (retryable) | ## Rate Limits Rate limits depend on your plan: | Plan | Requests/min | Queries/day | Atoms | | --- | --- | --- | --- | | Free | 60 | 1,000 | 10,000 | | Pro | 600 | 50,000 | 500,000 | | Team | 3,000 | 500,000 | 5,000,000 | Rate limit headers are included in every response: ``` X-RateLimit-Limit: 60 X-RateLimit-Remaining: 58 X-RateLimit-Reset: 1712345678 ``` --- URL: https://mnemoverse.com/docs/api/claude # Claude Code & Claude Desktop ::: tip For AI Agents Machine-readable docs: [mnemoverse/llms.txt](https://raw.githubusercontent.com/mnemoverse/.github/main/llms.txt) (works in all sandboxes) ::: Give Claude persistent memory. One MCP server entry — it never forgets again. Once configured, Claude Code has long-term memory across all sessions. **Get your API key** at [console.mnemoverse.com](https://console.mnemoverse.com) (free tier, no credit card). ## Claude Desktop Register the MCP server in `~/Library/Application Support/Claude/claude_desktop_config.json`. > ⚠️ **Restart Claude Desktop** after saving. MCP servers are only picked up on client startup. ## Try It Say to Claude: > "Remember that I always deploy to Railway using `railway up`" Then in a new session: > "Deploy this project" Claude remembers. ## Tools The MCP server gives Claude 6 tools: ### memory_write — Store a memory When Claude learns something worth keeping — a preference, a lesson, a decision — it stores it. 
``` You: "I prefer Tailwind over CSS modules" Claude: [calls memory_write] content: "User prefers Tailwind CSS over CSS modules for styling" concepts: ["tailwind", "css", "styling", "preferences"] → Stored (importance: 0.82) ``` | Name | Type | Required | Description | |------|------|----------|-------------| | `content` | string | Yes | What to remember (1-10,000 chars) | | `concepts` | string[] | No | Key concepts for linking memories | | `domain` | string | No | Namespace: `"engineering"`, `"user:alice"` | ### memory_read — Recall memories Before starting a task, Claude checks what it already knows. ``` You: "Set up the database" Claude: [calls memory_read] query: "database setup preferences and history" → 1. [92%] Project uses PostgreSQL 15 + Prisma ORM (engineering) 2. [87%] Always run migrations with --create-only first (lessons) 3. [71%] DB hosted on Supabase, connection string in .env.local ``` | Name | Type | Required | Description | |------|------|----------|-------------| | `query` | string | Yes | Natural language search (1-5,000 chars) | | `top_k` | integer | No | Max results (default: 5, max: 50) | | `domain` | string | No | Filter by namespace | ### memory_feedback — Rate memories After using a memory, Claude reports whether it helped. Good memories surface faster next time. ``` Claude: [calls memory_feedback] atom_ids: ["550e8400-..."] outcome: 1.0 // Very helpful! → Feedback recorded for 1 memory. 
``` | Name | Type | Required | Description | |------|------|----------|-------------| | `atom_ids` | string[] | Yes | Memory IDs from read results | | `outcome` | number | Yes | -1.0 (harmful) to 1.0 (very helpful) | ### memory_stats — Check status ``` Claude: [calls memory_stats] → Memories: 1,250 (500 episodes, 450 prototypes) Associations: 8,500 Hebbian edges Domains: engineering, user:alice, project:acme Avg quality: valence 0.65, importance 0.72 ``` ### memory_delete — Forget one memory For when Claude stored a wrong fact or the user explicitly asks to forget something specific. ``` You: "Forget what I said about Railway — we moved to Fly.io" Claude: [calls memory_read to find the atom_id, then memory_delete] atom_id: "550e8400-e29b-41d4-a716-446655440000" → Deleted memory 550e8400-.... ``` | Name | Type | Required | Description | | ---- | ---- | -------- | ----------- | | `atom_id` | string | Yes | The atom_id of the memory to delete (from memory_read results) | Idempotent — deleting an already-gone memory returns "No memory found with id ..." instead of an error. ### memory_delete_domain — Wipe an entire topic For broad cleanups when the user wants to forget everything in a namespace — e.g. "wipe my benchmark experiments". This is much more destructive than `memory_delete`, so Claude should only call it after an explicit user request, and the `confirm: true` parameter is a safety interlock enforced by the schema. ``` You: "Wipe everything about project:old-client" Claude: [calls memory_delete_domain] domain: "project:old-client" confirm: true → Deleted 42 memories from domain "project:old-client". 
``` | Name | Type | Required | Description | | ---- | ---- | -------- | ----------- | | `domain` | string | Yes | The domain namespace to wipe (must match exactly) | | `confirm` | literal true | Yes | Safety interlock — must be exactly `true` | ## What to Remember | Category | Example | |----------|---------| | **Preferences** | "User prefers dark mode", "Always use pnpm, not npm" | | **Project context** | "This repo uses PostgreSQL + Prisma", "Deploy target is Railway" | | **Lessons learned** | "Never deploy on Fridays", "Run tests before push" | | **Decisions** | "Chose REST over GraphQL for caching simplicity" | | **People** | "Alice owns the design system", "Bob reviews all API changes" | | **Patterns** | "Exponential backoff fixed timeout issues in this service" | ## Universal Memory Same API key, same memories — across all tools. ``` ┌── Claude Code ← you are here ├── Claude Desktop ← you are here Mnemoverse API ──├── Cursor / VS Code / Windsurf (one memory) ├── ChatGPT (Custom Actions) └── Python SDK / REST ``` Write a memory in Claude Code → ChatGPT reads it. Learn something in Claude Desktop → Cursor knows it. ## Configuration | Variable | Required | Default | |----------|----------|---------| | `MNEMOVERSE_API_KEY` | Yes | — | | `MNEMOVERSE_API_URL` | No | `https://core.mnemoverse.com/api/v1` | ## Source & Distribution [![npm version](https://img.shields.io/npm/v/@mnemoverse/mcp-memory-server.svg?color=cb3837&label=npm)](https://www.npmjs.com/package/@mnemoverse/mcp-memory-server) [![MCP Registry](https://img.shields.io/badge/MCP_Registry-listed-0ea5e9)](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse) The MCP server is also listed on the [Official MCP Registry](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse). Source on [GitHub](https://github.com/mnemoverse/mcp-memory-server) under MIT. 
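For reference, a minimal `claude_desktop_config.json` entry follows the standard MCP client shape. This is a sketch — the package name and environment variable come from the sections above, but check the npm README for the current recommended invocation:

```json
{
  "mcpServers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server@latest"],
      "env": {
        "MNEMOVERSE_API_KEY": "mk_live_YOUR_KEY"
      }
    }
  }
}
```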
## Related - [Cursor, VS Code & Windsurf](/api/editors) — editor integrations - [ChatGPT](/api/chatgpt) — give any Custom GPT persistent memory - [Python SDK](/api/python-sdk) — for scripts and backends - [API Reference](/api/reference) — full endpoint documentation --- URL: https://mnemoverse.com/docs/api/editors # Cursor, VS Code & Windsurf ::: tip For AI Agents Machine-readable docs: [mnemoverse/llms.txt](https://raw.githubusercontent.com/mnemoverse/.github/main/llms.txt) (works in all sandboxes) ::: Give your code editor persistent memory. One config file — it remembers across sessions. **Get your API key** at [console.mnemoverse.com](https://console.mnemoverse.com) (free tier, no credit card). If you already have a key from Claude Code or ChatGPT — **use the same one**. That's the point. ## VS Code + GitHub Copilot Chat Agent Mode — one-click install If you use **GitHub Copilot Chat in Agent Mode** on VS Code 1.102 or newer, install the [**Mnemoverse Memory** extension](https://marketplace.visualstudio.com/items?itemName=Mnemoverse.mnemoverse-vscode) from the VS Code Marketplace — no `mcp.json` file to edit, no JSON to commit. ```bash code --install-extension Mnemoverse.mnemoverse-vscode ``` Or search **"Mnemoverse Memory"** in the Extensions sidebar and click **Install**. After install, open Copilot Chat (`Cmd/Ctrl+Shift+I`), switch the mode picker to **Agent**, and on first run VS Code prompts for an API key (stored in the OS keychain, never on disk). You can also run `Mnemoverse: Set API Key` from the command palette. > This extension is specifically for **GitHub Copilot Chat Agent Mode** — the only chat client in the VS Code ecosystem that consumes MCP servers registered via the native `vscode.lm` API. If you use Cursor, Windsurf, or a different assistant inside VS Code, keep reading and use the manual JSON config below instead. 
| Chat client | Install path | | --- | --- | | **VS Code + Copilot Chat — Agent Mode** | Marketplace extension above (one click) | | **VS Code + Copilot Chat — Ask / Edit Mode** | Not supported — MCP servers only run in Agent Mode | | **VS Code — any other extension** | Manual `.vscode/mcp.json` (see VS Code section below) | | **Cursor** | `.cursor/mcp.json` (see Cursor section below) | | **Windsurf** | `~/.codeium/windsurf/mcp_config.json` (see Windsurf section below) | The extension is open-source on [github.com/mnemoverse/mnemoverse-vscode](https://github.com/mnemoverse/mnemoverse-vscode) under MIT, and also listed on [Open VSX](https://open-vsx.org/extension/mnemoverse/mnemoverse-vscode) for VSCodium / Gitpod / Theia users. ## Manual JSON config (Cursor, VS Code, Windsurf) > ⚠️ **Restart your editor** after editing the config file. MCP servers are only picked up on client startup. **Why `@latest`?** It forces a registry metadata lookup on every session start so you always get the newest release — bare `npx @mnemoverse/mcp-memory-server` caches the first installed version indefinitely. ## Try It Say to your editor AI: > "Remember that I always deploy to Railway using `railway up`" Then in a new session: > "Deploy this project" It remembers. ## Tools The MCP server gives your editor 6 tools: | Tool | What it does | | ---- | ------------ | | `memory_write` | Store a preference, decision, or lesson | | `memory_read` | Search memories by natural language query | | `memory_feedback` | Report if a memory was helpful (+1) or wrong (-1) | | `memory_stats` | Check how many memories are stored | | `memory_delete` | Permanently delete a single memory by its id | | `memory_delete_domain` | Wipe all memories in a domain (requires explicit confirm) | For detailed parameter docs, see [Claude Code & Desktop](/api/claude#tools) or [API Reference](/api/reference). ## Universal Memory Same API key, same memories — across all tools. 
``` ┌── Claude Code / Desktop Mnemoverse API ──├── Cursor / VS Code / Windsurf ← you are here (one memory) ├── ChatGPT (Custom Actions) └── Python SDK / REST ``` Write a memory in Cursor → Claude Code reads it. Learn something in VS Code → ChatGPT knows it. ## Configuration | Variable | Required | Default | |----------|----------|---------| | `MNEMOVERSE_API_KEY` | Yes | — | | `MNEMOVERSE_API_URL` | No | `https://core.mnemoverse.com/api/v1` | ## Source & Distribution [![npm version](https://img.shields.io/npm/v/@mnemoverse/mcp-memory-server.svg?color=cb3837&label=npm)](https://www.npmjs.com/package/@mnemoverse/mcp-memory-server) [![MCP Registry](https://img.shields.io/badge/MCP_Registry-listed-0ea5e9)](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse) The MCP server is also listed on the [Official MCP Registry](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse) — any MCP-aware client that browses the registry will find it there. Source code is open on [GitHub](https://github.com/mnemoverse/mcp-memory-server) under MIT. ## Related - [Claude Code & Desktop](/api/claude) — Claude integrations - [ChatGPT](/api/chatgpt) — give any Custom GPT persistent memory - [Python SDK](/api/python-sdk) — for scripts and backends - [API Reference](/api/reference) — full endpoint documentation --- URL: https://mnemoverse.com/docs/api/chatgpt # ChatGPT Integration ::: info Manual Setup Required ChatGPT Custom GPTs cannot self-configure via agent link. Follow the steps below to set up manually. ::: Give any Custom GPT persistent memory powered by Mnemoverse. Same memory as Claude Code, Cursor, and every other tool. ## How It Works ``` You → ChatGPT → GPT Action (HTTP) → core.mnemoverse.com │ X-Api-Key: mk_live_xxx ``` ChatGPT calls the Mnemoverse API directly through GPT Actions. No middleware, no proxy. Your GPT stores and recalls memories using the same API key as your other tools. ## Setup (5 minutes) ### Step 1. 
Get Your API Key Sign up at [console.mnemoverse.com](https://console.mnemoverse.com) and create a free API key. It starts with `mk_live_`. If you already have a key from Claude Code or Cursor — **use the same one**. That's the point. ### Step 2. Create a Custom GPT 1. Go to [chatgpt.com/gpts/editor](https://chatgpt.com/gpts/editor) 2. Click **Create a GPT** ### Step 3. Configure Actions 1. Scroll down to **Actions** → **Create new action** 2. Set **Authentication**: - Type: **API Key** - API Key: paste your `mk_live_YOUR_KEY` - Auth Type: **Custom** - Custom Header Name: `X-Api-Key` 3. In the **Schema** box, click **Import from URL** and paste: ``` https://mnemoverse.com/docs/openapi-gpt.yaml ``` You should see 4 actions appear: `writeMemory`, `readMemory`, `giveFeedback`, `getMemoryStats`. ### Step 4. Set the System Prompt Paste this into the **Instructions** field (customize to your needs): ``` You have persistent long-term memory via Mnemoverse. RULES: 1. BEFORE answering any question about preferences, past decisions, project setup, people, or anything that might have been discussed before — call readMemory first. 2. When the user shares a preference, makes a decision, teaches you something, or tells you something important — call writeMemory immediately. Don't wait to be asked. 3. After using memories to answer a question, call giveFeedback to report whether they were helpful (1.0) or not (-1.0). 4. Your memory persists across sessions and across tools. What you learn here is available in Claude Code, Cursor, and everywhere else. WHAT TO REMEMBER: - Preferences ("I prefer Railway over Heroku") - Decisions ("We chose PostgreSQL for this project") - Lessons ("Exponential backoff fixed the timeout issue") - People ("Alice owns the design system") - Project context ("Deploy target is staging.example.com") WHAT NOT TO REMEMBER: - Trivial small talk - Temporary context ("open this file") - Information already in the current conversation ``` ### Step 5. 
Test It Say to your GPT: > "Remember that I always deploy to Railway using `railway up`" The GPT calls `writeMemory`. Then start a **new conversation** and ask: > "How do I deploy?" The GPT calls `readMemory` and recalls your preference. Same memory, new session. ## Cross-Tool Proof The real magic: this memory is shared with every other tool. 1. **In Claude Code**: "Remember that the staging URL is staging.acme.com" 2. **In ChatGPT**: "What's the staging URL?" 3. ChatGPT recalls it. One memory, everywhere. ``` ┌── Claude Code (MCP) ├── Cursor (MCP) Mnemoverse API ──├── VS Code (MCP) (one memory) ├── ChatGPT (Actions) ← you are here ├── Python SDK └── REST API ``` ## Available Actions Your GPT gets 4 actions: | Action | What it does | |--------|-------------| | `writeMemory` | Store a preference, decision, or lesson | | `readMemory` | Search memories by natural language query | | `giveFeedback` | Report if a memory was helpful (+1) or wrong (-1) | | `getMemoryStats` | Check how many memories are stored | These are the same operations available in Claude Code, Cursor, and the Python SDK — just exposed through GPT Actions. ## Tips **Make it proactive.** The system prompt tells the GPT to store memories without being asked. This is key — users won't say "remember this", they'll just state preferences. **Use domains.** If your GPT serves a specific project, add `"domain": "project:acme"` to writes and reads. This keeps memories organized. **Feedback matters.** The `giveFeedback` action trains the memory system. Memories that get positive feedback rank higher in future searches. Over time, the most useful memories surface first. ## Limitations - GPT Actions have a timeout (~30 seconds). Mnemoverse typically responds in under 200 ms, well within that limit. - ChatGPT may not call actions on every turn. The system prompt guides it, but GPT-4 is better at following action instructions than GPT-3.5. - Rate limits apply per API key. Free tier: 1,000 queries/day. 
[Upgrade](https://console.mnemoverse.com) for more. ## OpenAPI Spec The full spec is at [`/openapi-gpt.yaml`](https://mnemoverse.com/docs/openapi-gpt.yaml). It defines 4 endpoints: - `POST /memory/write` — store a memory - `POST /memory/read` — search memories - `POST /memory/feedback` — rate usefulness - `GET /memory/stats` — memory statistics This is a curated subset of the [full API](/api/reference). For batch operations, consolidation, and advanced queries, use the [REST API](/api/reference) or [Python SDK](/api/python-sdk) directly. ## Related - [Claude Code & Desktop](/api/claude) — Claude integrations - [Cursor, VS Code & Windsurf](/api/editors) — editor integrations - [Python SDK](/api/python-sdk) — for scripts and backends - [API Reference](/api/reference) — full endpoint documentation - [Getting Started](/api/getting-started) — first API call in 2 minutes --- URL: https://mnemoverse.com/docs/api/python-sdk # Python SDK ## Installation ```bash pip install mnemoverse ``` Requires Python 3.10+. ## Quick Start ```python from mnemoverse import MnemoClient client = MnemoClient(api_key="mk_live_YOUR_KEY") # Store a memory result = client.write( "Retry with exponential backoff fixed the timeout issue", concepts=["retry", "backoff", "timeout"], domain="engineering" ) print(f"Stored: {result.stored}, ID: {result.atom_id}") # Query memories memories = client.read("how to handle timeouts?", top_k=5) for item in memories.items: print(f"[{item.relevance:.2f}] {item.content}") # Report outcome — the system learns what works client.feedback( atom_ids=[item.atom_id for item in memories.items], outcome=1.0, query_concepts=memories.query_concepts ) ``` ## Client Configuration ```python client = MnemoClient( api_key="mk_live_YOUR_KEY", base_url="https://core.mnemoverse.com", # default timeout=10.0, # seconds, default: 10 max_retries=3, # default: 3 ) ``` ## Methods ### write() Store a single memory. 
```python result = client.write( content="Caching reduced API latency by 40%", concepts=["caching", "latency", "optimization"], domain="engineering", metadata={"source": "incident-report", "date": "2026-04-08"}, external_ref="incident-42" # idempotent — won't duplicate ) # result.stored: bool # result.atom_id: UUID | None # result.importance: float # result.reason: str ``` ### write_batch() Store up to 500 memories in one call. ```python items = [ {"content": "Memory 1", "concepts": ["a"]}, {"content": "Memory 2", "concepts": ["b"]}, ] result = client.write_batch(items) # result.total_count: int # result.stored_count: int # result.results: list[WriteBatchItemResult] ``` ### read() Query memories with semantic search + Hebbian expansion. ```python memories = client.read( query="how to optimize database queries?", top_k=10, domain="engineering", min_relevance=0.3, include_associations=True ) # memories.items: list[MemoryItem] # memories.episodic_hit: bool # memories.query_concepts: list[str] # memories.expanded_concepts: list[str] # memories.search_time_ms: float ``` ### feedback() Report outcomes to update valence and Hebbian associations. ```python response = client.feedback( atom_ids=[memories.items[0].atom_id], outcome=0.8, # -1.0 (failure) to +1.0 (success) query_concepts=memories.query_concepts # enables co-activation learning ) # response.updated_count: int # response.avg_valence: float # response.coactivation_edges: int ``` ### stats() Get memory statistics. ```python stats = client.stats() # stats.total_atoms: int # stats.hebbian_edges: int # stats.avg_valence: float # stats.domains: list[str] ``` ### health() Check API health. 
```python
health = client.health()
# health.status: str ("ok")
# health.database: bool
```

## Async Client

For async applications (FastAPI, Discord bots, etc.):

```python
from mnemoverse import AsyncMnemoClient

client = AsyncMnemoClient(api_key="mk_live_YOUR_KEY")

result = await client.write("async memory", concepts=["async"])
memories = await client.read("what about async?")
await client.feedback(atom_ids=[...], outcome=1.0)
```

## Error Handling

```python
from mnemoverse import MnemoClient, MnemoAuthError, MnemoRateLimitError, MnemoError

client = MnemoClient(api_key="mk_live_YOUR_KEY")

try:
    result = client.read("query")
except MnemoAuthError:
    print("Invalid API key")
except MnemoRateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except MnemoError as e:
    print(f"API error: {e.message}")
```

## Integration Examples

### LangChain Tool

```python
from langchain.tools import tool
from mnemoverse import MnemoClient

memory = MnemoClient(api_key="mk_live_YOUR_KEY")

@tool
def remember(content: str, concepts: list[str]) -> str:
    """Store a memory for future reference."""
    result = memory.write(content, concepts=concepts)
    return f"Stored: {result.stored}, importance: {result.importance:.2f}"

@tool
def recall(query: str) -> str:
    """Search memories for relevant information."""
    results = memory.read(query, top_k=5)
    return "\n".join(f"- {item.content}" for item in results.items)
```

### CrewAI Agent

```python
from crewai import Agent, Task
from mnemoverse import MnemoClient

memory = MnemoClient(api_key="mk_live_YOUR_KEY")

# Store experience after task completion
def on_task_complete(task_output):
    memory.write(
        content=f"Task result: {task_output.summary}",
        concepts=task_output.tags,
        domain="crewai"
    )
    memory.feedback(atom_ids=[...], outcome=task_output.score)
```

## Source Code

The SDK is open source: [github.com/mnemoverse/mnemoverse-sdk-python](https://github.com/mnemoverse/mnemoverse-sdk-python)

---
URL: https://mnemoverse.com/docs/api/mcp-server

# MCP Server

[![npm version](https://img.shields.io/npm/v/@mnemoverse/mcp-memory-server.svg?color=cb3837&label=npm)](https://www.npmjs.com/package/@mnemoverse/mcp-memory-server) [![MCP Registry](https://img.shields.io/badge/MCP_Registry-listed-0ea5e9)](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/mnemoverse/mcp-memory-server/blob/main/LICENSE)

Give any MCP-compatible AI tool persistent memory. One config — it never forgets again.

The `@mnemoverse/mcp-memory-server` npm package works with any tool that supports the [Model Context Protocol](https://modelcontextprotocol.io/). Listed on the [Official MCP Registry](https://registry.modelcontextprotocol.io/v0.1/servers?search=mnemoverse) — any MCP-aware client that browses the registry will find it there.

## Pick Your Tool

- **[Claude Code & Claude Desktop](/api/claude)** — one command for Claude Code, JSON config for Desktop
- **[Cursor, VS Code & Windsurf](/api/editors)** — JSON config per editor
- **[ChatGPT](/api/chatgpt)** — GPT Actions (no MCP, direct API)
- **[Python SDK](/api/python-sdk)** — `pip install mnemoverse` for scripts and backends

## How It Works

```
Your AI tool ←stdio→ @mnemoverse/mcp-memory-server ←HTTPS→ core.mnemoverse.com
```

The MCP server runs locally via `npx`, communicates with your AI tool over stdio, and calls the Mnemoverse API over HTTPS. Your memories are stored server-side — accessible from any tool with the same API key.
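Most MCP clients accept a server entry of the same shape; only the config file's name and location differ per tool (see the per-tool pages). A generic sketch — check your client's docs for the exact top-level key (VS Code, for example, uses `servers` instead of `mcpServers`):

```json
{
  "mcpServers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server"],
      "env": { "MNEMOVERSE_API_KEY": "mk_live_YOUR_KEY" }
    }
  }
}
```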
## Tools

The server exposes 6 tools to your AI:

| Tool | What it does |
| ---- | ------------ |
| `memory_write` | Store a preference, decision, or lesson |
| `memory_read` | Search memories by natural language query |
| `memory_feedback` | Report if a memory was helpful (+1) or wrong (-1) |
| `memory_stats` | Check how many memories are stored |
| `memory_delete` | Permanently delete a single memory by `atom_id` |
| `memory_delete_domain` | Wipe an entire domain (requires explicit `confirm: true`) |

See the [Claude Code & Desktop](/api/claude#tools) page for detailed parameter docs and examples for each tool.

## Universal Memory

One API key. Every tool. Same memories.

```
                 ┌── Claude Code / Desktop
Mnemoverse API ──├── Cursor / VS Code / Windsurf
  (one memory)   ├── ChatGPT (Custom Actions)
                 └── Python SDK / REST
```

Write a memory in Claude Code → Cursor reads it. Learn something in VS Code → ChatGPT knows it.

## Configuration

| Variable | Required | Default |
|----------|----------|---------|
| `MNEMOVERSE_API_KEY` | Yes | — |
| `MNEMOVERSE_API_URL` | No | `https://core.mnemoverse.com/api/v1` |

Get your API key at [console.mnemoverse.com](https://console.mnemoverse.com) (free tier, no credit card).

## Source Code

MIT licensed: [github.com/mnemoverse/mcp-memory-server](https://github.com/mnemoverse/mcp-memory-server)

---
URL: https://mnemoverse.com/docs/api/agent-setup

# Quick Setup

::: tip For AI Agents
Machine-readable docs (plain text, no auth):

- **GitHub** (works in all sandboxes): [mnemoverse/llms.txt](https://raw.githubusercontent.com/mnemoverse/.github/main/llms.txt)
- **Website**: [mnemoverse.com/docs/llms.txt](https://mnemoverse.com/docs/llms.txt)

Claude chat sandbox blocks most domains except github.com — use the GitHub link.
:::

Set up Mnemoverse persistent memory in your AI tool. Pick your tool below.

## What You Need

1. A Mnemoverse API key (starts with `mk_live_`).
   If the user hasn't provided one, tell them to get a free key at [console.mnemoverse.com](https://console.mnemoverse.com).
2. Know which tool you are running in (see below).

## Step 1: Detect Your Environment

Determine which AI tool you are running in, then follow the matching setup.

### Claude Code

Run this command:

```bash
claude mcp add mnemoverse \
  -e MNEMOVERSE_API_KEY=mk_live_USER_KEY \
  -- npx -y @mnemoverse/mcp-memory-server
```

Replace `mk_live_USER_KEY` with the actual key. Done after this command succeeds.

### Cursor

Create the file `.cursor/mcp.json` in the project root:

```json
{
  "mcpServers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server"],
      "env": { "MNEMOVERSE_API_KEY": "mk_live_USER_KEY" }
    }
  }
}
```

Tell the user to restart Cursor after you create this file.

### VS Code

Create the file `.vscode/mcp.json` in the project root:

```json
{
  "servers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server"],
      "env": { "MNEMOVERSE_API_KEY": "mk_live_USER_KEY" }
    }
  }
}
```

Tell the user to restart VS Code after you create this file.

### Claude Desktop

Tell the user to add this to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server"],
      "env": { "MNEMOVERSE_API_KEY": "mk_live_USER_KEY" }
    }
  }
}
```

Claude Desktop requires a manual config edit and restart.

### Windsurf

Create or edit `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "mnemoverse": {
      "command": "npx",
      "args": ["-y", "@mnemoverse/mcp-memory-server"],
      "env": { "MNEMOVERSE_API_KEY": "mk_live_USER_KEY" }
    }
  }
}
```

Tell the user to restart Windsurf after you create this file.

### ChatGPT (Custom GPT)

You cannot self-configure inside ChatGPT. Tell the user to follow the manual setup guide: [ChatGPT Integration](/api/chatgpt).
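The editor configs above differ only in file location and, for VS Code, the top-level key. If you are scripting the setup, a small illustrative helper (not part of any Mnemoverse SDK — the function names here are invented for the example) can generate the right structure:

```python
import json
from pathlib import Path

# Illustrative helper: builds the MCP server entry used by Cursor,
# VS Code, and Windsurf, exactly as shown in the configs above.
def mcp_config(api_key: str, editor: str = "cursor") -> dict:
    server = {
        "command": "npx",
        "args": ["-y", "@mnemoverse/mcp-memory-server"],
        "env": {"MNEMOVERSE_API_KEY": api_key},
    }
    # VS Code nests servers under "servers"; Cursor and Windsurf use "mcpServers".
    top_key = "servers" if editor == "vscode" else "mcpServers"
    return {top_key: {"mnemoverse": server}}

def write_config(api_key: str, editor: str, path: str) -> None:
    """Write the generated config to the editor's expected location."""
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(mcp_config(api_key, editor), indent=2))
```

For example, `write_config("mk_live_USER_KEY", "cursor", ".cursor/mcp.json")` recreates the Cursor config shown above.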
### Other Tools

If the tool supports MCP, use the Cursor config format (it is the most common). If the tool does not support MCP, use the REST API directly:

```bash
# Write a memory
curl -X POST https://core.mnemoverse.com/api/v1/memory/write \
  -H "X-Api-Key: mk_live_USER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "Test memory", "concepts": ["test"]}'

# Read memories
curl -X POST https://core.mnemoverse.com/api/v1/memory/read \
  -H "X-Api-Key: mk_live_USER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "test", "top_k": 5}'
```

## Step 2: Verify

After setup, test by writing and reading a memory:

1. Store: "Remember that this is a test memory for setup verification"
2. Read: "What test memories do I have?"

If the read returns the test memory, setup is complete.

## Step 3: Tell the User

Confirm to the user:

- Memory is configured and working
- Memories persist across sessions
- Same memories are accessible from any tool using the same API key
- You will proactively remember preferences, decisions, and lessons

## Available Tools After Setup

| Tool | What it does |
|------|-------------|
| `memory_write` | Store a preference, decision, or lesson |
| `memory_read` | Search memories by natural language query |
| `memory_feedback` | Report if a memory was helpful (+1) or wrong (-1) |
| `memory_stats` | Check how many memories are stored |

---
URL: https://mnemoverse.com/docs/api/security

# Security

## Tenant Isolation

Every API key is bound to a unique `organization_id`. All database queries are scoped to your organization. There is no way to access another tenant's data.

```text
Your API key → organization_id → all queries filtered by org_id
```

Atoms, Hebbian edges, consolidation runs, and usage metrics are all isolated per tenant.

## API Key Security

- API keys are **hashed** (SHA-256) before storage. We never store plaintext keys.
- Key comparison uses **constant-time** comparison (`secrets.compare_digest`) to prevent timing attacks.
- Keys are shown exactly **once** at creation. If lost, revoke and create a new one.
- Key prefix (`mk_live_...`) is stored for identification without exposing the full key.

## Data Privacy

- **We do not train on your data.** Your memories are used solely to serve your queries.
- **No cross-tenant learning.** Hebbian associations and valence updates are per-tenant.

## Transport Security

All API traffic uses HTTPS.

## Rate Limiting

Per-tenant rate limiting prevents abuse:

| Plan | Requests/min | Daily queries | Stored atoms |
| --- | --- | --- | --- |
| Free | 60 | 1,000 | 10,000 |
| Pro | 600 | 50,000 | 500,000 |
| Team | 3,000 | 500,000 | 5,000,000 |

Exceeding limits returns HTTP 429 with a `Retry-After` header.

## Infrastructure

- **Database:** PostgreSQL 17 with pgvector extension
- **Hosting:** Railway (US region)

## Reporting Security Issues

Report vulnerabilities to [security@mnemoverse.com](mailto:security@mnemoverse.com).

---
URL: https://mnemoverse.com/docs/api/overview

# Memory API

Persistent memory for AI agents. Not vector search — statistical learning.

**[Sign up free at console.mnemoverse.com](https://console.mnemoverse.com/sign-up)** — no credit card, 1K queries/day.
## 3 Lines of Python

```python
from mnemoverse import MnemoClient

client = MnemoClient(api_key="mk_...")

# Store a memory
client.write("Retry with exponential backoff fixed the timeout issue",
             concepts=["retry", "backoff", "timeout"])

# Query — Hebbian associations expand "timeout" → "backoff", "retry"
results = client.read("how to handle timeouts?")

# Report outcome — system learns what works
client.feedback(atom_ids=[r.atom_id for r in results.items], outcome=1.0)
```

## Not a Vector Database

| | Mnemoverse | Pinecone / Weaviate / Chroma |
| --- | --- | --- |
| **Core model** | Statistical learning (Rescorla-Wagner + Hebbian) | Vector embeddings (cosine similarity) |
| **Learns from outcomes** | Yes — feedback loop updates valence | No — static retrieval |
| **Concept associations** | Three-factor Hebbian graph | None |
| **Memory compression** | HDBSCAN consolidation + Von Restorff | Accumulate forever |
| **Query expansion** | Automatic via learned associations | Manual or none |
| **Starting price** | Free (1K queries/day) | $25-70/mo |

We don't just store and retrieve. We **learn which memories matter**.

## How It Works

```text
1. WRITE       → Importance gate filters noise → Stored with semantic embedding
2. READ        → Hebbian expansion → Valence-boosted ranking → Results
3. FEEDBACK    → Outcome updates valence → Strengthens associations
4. CONSOLIDATE → HDBSCAN clusters similar → Prototypes + singletons
```

Memories that lead to good outcomes rank higher in future queries. Memories that consistently fail get suppressed. The system improves with use.
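The exact scoring function is internal to the service, but the "valence-boosted ranking" step can be illustrated with a toy formula (the weights below are invented for this example, not the production values):

```python
def rank_score(similarity: float, valence: float, importance: float,
               valence_weight: float = 0.2) -> float:
    """Toy valence-boosted score: semantic similarity, scaled by
    write-time importance, then shifted by learned outcome polarity.

    similarity: cosine similarity in [0, 1]
    valence:    outcome history in [-1, +1]
    importance: write-time importance in [0, 1]
    """
    return similarity * importance * (1.0 + valence_weight * valence)

# Two equally similar memories: one with a history of good outcomes,
# one that has repeatedly failed.
helpful = rank_score(similarity=0.80, valence=1.0, importance=0.9)
harmful = rank_score(similarity=0.80, valence=-1.0, importance=0.9)
assert helpful > harmful  # feedback reorders results
```

This is the intuition behind step 2 of the pipeline: feedback changes future rankings even though the embeddings stay fixed.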
## Pricing

| | Free | Pro | Team |
| --- | --- | --- | --- |
| **Price** | $0/mo | $29/mo | $149/mo |
| **Queries/day** | 1,000 | 50,000 | 500,000 |
| **Atoms (memories)** | 10,000 | 500,000 | 5,000,000 |
| **Rate limit** | 60/min | 600/min | 3,000/min |
| **API keys** | 1 | Unlimited | Unlimited |
| **Support** | Community | Email | Slack |

## Built on Research

- **Published**: [SLoD framework on arXiv](https://arxiv.org/abs/2603.08965) (Semantic Level of Detail)
- **Benchmarked**: evaluated on LoCoMo (1,986 questions) and LongMemEval — see [Benchmarks](/technology/benchmarks) for the current numbers
- **Validated**: experiments across game-playing agents and academic benchmarks

## Next Steps

- [Getting Started](/api/getting-started) — First API call in 2 minutes
- [API Reference](/api/reference) — Full endpoint documentation
- [Python SDK](/api/python-sdk) — `pip install mnemoverse`
- [MCP Server](/api/mcp-server) — Persistent memory for Claude and Cursor
- [Security](/api/security) — Tenant isolation, data privacy, compliance

---
URL: https://mnemoverse.com/docs/api/use-cases/coding-assistants

# Memory for Coding Assistants

Your AI coding assistant forgets your project between sessions. Fix it with persistent memory.

## The Problem

You told Claude Code your deploy target is Railway. Tomorrow it suggests Heroku. You explained the database schema last week. Today it asks again. Every session is Groundhog Day.

## The Solution

One line — and your coding assistant remembers everything across sessions, projects, and tools.
```bash
claude mcp add mnemoverse \
  -e MNEMOVERSE_API_KEY=mk_live_YOUR_KEY \
  -- npx @mnemoverse/mcp-memory-server
```

For detailed setup per tool, see the integration guides:

- **[Claude Code & Desktop](/api/claude)**
- **[Cursor, VS Code & Windsurf](/api/editors)**

Or give your agent this URL to self-configure: `https://raw.githubusercontent.com/mnemoverse/.github/main/llms.txt`

## What Gets Remembered

Your coding assistant will proactively remember:

| Category | Example |
|----------|---------|
| **Project setup** | "Uses PostgreSQL + Prisma, deployed on Railway" |
| **Preferences** | "Always use pnpm", "Prefers Tailwind over CSS modules" |
| **Decisions** | "Chose REST over GraphQL for caching simplicity" |
| **Lessons** | "Never deploy on Fridays — learned the hard way" |
| **People** | "Alice owns the design system, Bob reviews API changes" |
| **Patterns** | "Exponential backoff fixed timeout issues in this service" |

## Cross-Tool Memory

Write a memory in Claude Code → read it in Cursor. The same API key works everywhere.

```
Monday in Claude Code:
  You: "We always use feature branches, never push to main directly"
  Claude: [stores memory]

Wednesday in Cursor:
  You: "Push this change"
  Cursor: [reads memory] "Created feature branch. PR to main as usual."
```

## Recommended CLAUDE.md Setup

Add this to your project's `CLAUDE.md` to make Claude use memory proactively:

```markdown
## Memory

This project uses Mnemoverse for persistent memory across sessions.

**Before answering** questions about preferences, past decisions, project setup, or anything discussed previously — call `memory_read` first.

**After learning** something worth keeping (preference, decision, lesson, project fact) — call `memory_write` to store it.

Don't rely only on local memory files — Mnemoverse memory is shared across all AI tools and team members.
```

## Team Memory

Same API key for the whole team → shared institutional knowledge:

- New team member's AI already knows the project conventions
- Decisions made in one person's session are available to everyone
- Onboarding happens automatically through accumulated memory

For isolation: use `domain: "user:{name}"` for personal preferences, `domain: "project:{name}"` for shared context.

## Get Started

1. [Get an API key](https://console.mnemoverse.com) (free, 30 seconds)
2. Pick your tool: [Claude](/api/claude) | [Editors](/api/editors) | [ChatGPT](/api/chatgpt)
3. Add the CLAUDE.md snippet to your project
4. Start coding — memory builds automatically

---
URL: https://mnemoverse.com/docs/api/use-cases/chat-extensions

# Universal Memory for AI Chat

One memory across ChatGPT, Claude, Gemini, and every AI chat tool. Stop repeating yourself to each one.

## The Problem

You told ChatGPT your tech stack. Claude doesn't know it. You explained your project to Gemini. Perplexity starts from scratch. Each AI lives in its own silo.

Mnemoverse breaks the silos. One API key, one memory, every tool.

## How It Works

```
                 ┌── Claude Code / Desktop (MCP)
                 ├── Cursor / VS Code / Windsurf (MCP)
Mnemoverse API ──├── ChatGPT (Custom Actions)
  (one memory)   ├── Gemini (REST API)
                 ├── Perplexity (REST API)
                 └── Your own app (Python SDK / REST)
```

Every tool connects to the same Mnemoverse API. Write a memory in one, recall it in any other. Your context follows you everywhere.

## Setup Guides

Each tool has its own integration page with step-by-step setup:

- **[Claude Code & Desktop](/api/claude)** — one command, MCP-based
- **[Cursor, VS Code & Windsurf](/api/editors)** — JSON config, MCP-based
- **[ChatGPT](/api/chatgpt)** — Custom GPT with Actions, OpenAPI spec

Or give your agent this URL to self-configure: `https://raw.githubusercontent.com/mnemoverse/.github/main/llms.txt`

## Gemini

The Mnemoverse REST API works with any tool that can make HTTP requests.
Use the [Python SDK](/api/python-sdk) in a Google Colab notebook or integrate directly via REST.

```python
from mnemoverse import MnemoClient

client = MnemoClient(api_key="mk_live_YOUR_KEY")

# Store context from a Gemini conversation
client.write(
    "User's project uses FastAPI + PostgreSQL, deployed on GCP",
    concepts=["stack", "fastapi", "gcp"]
)

# Later, in any tool — retrieve it
results = client.read("what tech stack does the user have?")
```

## Perplexity

Same approach — use the REST API alongside Perplexity queries. Perplexity retrieves web knowledge; Mnemoverse retrieves *your* knowledge.

## The Key Idea

RAG gives AI access to documents. Mnemoverse gives AI access to **you** — your preferences, decisions, lessons, and context. They complement each other:

| | RAG | Mnemoverse |
|---|---|---|
| **Source** | Documents, knowledge bases | Conversations, user behavior |
| **Answers** | "What do the docs say?" | "What did we discuss last time?" |
| **Learns** | No | Yes — feedback loop improves relevance |
| **Cross-tool** | No — per-app index | Yes — one memory, every tool |

## Get Started

1. [Get an API key](https://console.mnemoverse.com) (free)
2. Set up your primary tool ([Claude](/api/claude), [ChatGPT](/api/chatgpt), or [editors](/api/editors))
3. Start using naturally — "remember that I prefer..." / "what do you know about my..."
4. Add more tools — they all share the same memory

---
URL: https://mnemoverse.com/docs/api/use-cases/conversational-agent

# Conversational Agent Memory

Your chatbot forgets users between sessions. Mnemoverse fixes that.

## The Problem

Every conversation starts from zero. Your agent asks "What's your name?" for the 10th time. Users hate it.

## The Solution

Store user insights during conversation. Load them at the start of the next one. Per-user isolation via domains.
## How It Works

```python
from mnemoverse import MnemoClient

client = MnemoClient(api_key="mk_live_YOUR_KEY")

# During conversation — agent learns something about the user
client.write(
    "Alice prefers email notifications over Slack",
    concepts=["alice", "notifications", "email"],
    domain="user:alice"
)
client.write(
    "Alice is on the Pro plan, working on a trading bot",
    concepts=["alice", "plan", "trading"],
    domain="user:alice"
)
```

```python
# Next session — agent loads context before responding
memories = client.read(
    "What do I know about Alice?",
    domain="user:alice",
    top_k=10
)

# Feed memories into LLM prompt as context
context = "\n".join([m.content for m in memories.items])
```

## Architecture

```
User message → Agent
    ↓
memory_read("user context", domain="user:{id}")
    ↓
LLM generates response (with memory context)
    ↓
memory_write(insights learned, domain="user:{id}")
    ↓
Response → User
```

## Per-User Isolation

Each user gets their own memory domain. Alice's memories never leak to Bob.

```python
# Alice's conversation
client.write("Prefers dark mode", domain="user:alice")

# Bob's conversation
client.write("Uses light mode, large fonts", domain="user:bob")

# Reading Alice's context — only gets Alice's memories
client.read("user preferences", domain="user:alice")
# → "Prefers dark mode"
```

## Feedback Loop

When a memory helps the agent give a better answer, reinforce it:

```python
memories = client.read("Alice's notification preferences")

# Agent used this memory and user was happy
client.feedback(
    atom_ids=[memories.items[0].atom_id],
    outcome=1.0  # Very helpful
)
```

Over time, useful memories surface first. Stale ones fade.
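Joining `m.content` strings (as in the snippet above) is the simplest form of prompt assembly. A slightly more defensive sketch — the helper and its relevance floor are illustrative, not part of the SDK — that skips weak matches:

```python
def build_context(items, min_relevance: float = 0.3) -> str:
    """Format memory items into a prompt block, dropping weak matches.

    Each item is expected to expose .content and .relevance, matching
    the fields returned by client.read().
    """
    lines = [
        f"- {item.content}"
        for item in items
        if item.relevance >= min_relevance
    ]
    if not lines:
        return ""
    return "Known about this user:\n" + "\n".join(lines)
```

Then prepend it to the LLM call, e.g. `prompt = build_context(memories.items) + "\n\nUser: " + user_message`.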
## Use Cases

| Scenario | What to Remember |
|----------|-----------------|
| **Customer support** | User's plan, past issues, preferred contact method |
| **Personal assistant** | Schedule preferences, dietary restrictions, travel habits |
| **Tutoring bot** | Student's level, topics covered, learning pace |
| **Sales agent** | Prospect's company, pain points, decision timeline |
| **Health advisor** | Conditions, medications, goals, doctor preferences |

## Compared to RAG

RAG retrieves from a static knowledge base. Mnemoverse remembers from *conversations*. RAG answers "what do the docs say?" — Mnemoverse answers "what did we discuss last time?"

They complement each other:

- RAG = product knowledge (docs, FAQ)
- Mnemoverse = user knowledge (preferences, history, context)

## Get Started

1. [Get an API key](https://console.mnemoverse.com) (free, 30 seconds)
2. `pip install mnemoverse`
3. Add `write()` after conversations, `read()` before them
4. That's it — your agent now remembers users

---
URL: https://mnemoverse.com/docs/api/use-cases/agent-frameworks

# Experience Layer for Agent Frameworks

Agents repeat mistakes. Mnemoverse gives them experience.

## The Problem

Your LangChain agent solves a task, learns nothing, and starts from scratch next time. Multi-step agents waste tokens rediscovering what worked. CrewAI agents don't share learnings across runs.

## The Solution

An **experience layer** between your agent framework and the LLM. Agents write what worked (and what didn't), read past experience before planning, and improve over time.
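Framework-specific wiring follows, but the loop itself is framework-agnostic: read before planning, write after completion. The sketch below uses a tiny in-memory stand-in for the client so the shape is runnable on its own; in production you would pass a `MnemoClient` with the same `write`/`read` surface instead:

```python
class InMemoryStore:
    """Local stand-in for MnemoClient — illustration only."""

    def __init__(self):
        self.atoms: list[str] = []

    def write(self, content: str, **kwargs) -> None:
        self.atoms.append(content)

    def read(self, query: str, **kwargs) -> list[str]:
        # Crude keyword match in place of semantic search.
        words = query.lower().split()
        return [a for a in self.atoms if any(w in a.lower() for w in words)]

def run_task(store, task: str, execute) -> str:
    past = store.read(task)                    # 1. check past experience
    outcome = execute(task, past)              # 2. plan + execute with it
    store.write(f"Task: {task} -> {outcome}")  # 3. record what happened
    return outcome
```

The second run of a similar task sees the first run's record in `past` — which is exactly what the framework integrations below set up against the real API.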
## LangChain ```python from mnemoverse import MnemoClient from langchain.tools import tool client = MnemoClient(api_key="mk_live_YOUR_KEY") @tool def remember(insight: str, concepts: list[str] = []) -> str: """Store a lesson learned for future reference.""" result = client.write(insight, concepts=concepts, domain="agent:experience") return f"Remembered (importance: {result.importance:.2f})" @tool def recall(query: str) -> str: """Check past experience before starting a task.""" memories = client.read(query, domain="agent:experience", top_k=5) if not memories.items: return "No relevant past experience." return "\n".join( f"- [{m.relevance:.0%}] {m.content}" for m in memories.items ) ``` Add to your agent: ```python from langchain.agents import create_tool_calling_agent agent = create_tool_calling_agent( llm=llm, tools=[remember, recall, ...your_other_tools], prompt=prompt # Include: "Always check recall() before starting tasks" ) ``` ## LangGraph Memory as a node in your graph: ```python from langgraph.graph import StateGraph def check_experience(state): """Node: check if we've done this before.""" memories = client.read(state["task"], domain="agent:experience") state["experience"] = [m.content for m in memories.items] return state def save_experience(state): """Node: save what we learned.""" if state.get("outcome"): client.write( f"Task: {state['task']} → {state['outcome']}", concepts=state.get("concepts", []), domain="agent:experience" ) return state graph = StateGraph(State) graph.add_node("check_experience", check_experience) graph.add_node("plan", plan_task) graph.add_node("execute", execute_task) graph.add_node("save_experience", save_experience) graph.add_edge("check_experience", "plan") graph.add_edge("plan", "execute") graph.add_edge("execute", "save_experience") ``` ## n8n Use the HTTP Request node to call Mnemoverse API directly: **Write memory:** - Method: `POST` - URL: `https://core.mnemoverse.com/api/v1/memory/write` - Headers: `X-Api-Key: 
mk_live_YOUR_KEY` - Body: `{"content": "$json.insight", "concepts": ["$json.topic"]}` **Read memory:** - Method: `POST` - URL: `https://core.mnemoverse.com/api/v1/memory/read` - Headers: `X-Api-Key: mk_live_YOUR_KEY` - Body: `{"query": "$json.question", "top_k": 5}` *Custom n8n node coming soon — [GitHub issue](https://github.com/mnemoverse/mcp-memory-server/issues).* ## CrewAI ```python from crewai import Agent, Task, Crew from crewai.tools import tool as crewai_tool @crewai_tool("Remember") def remember(insight: str) -> str: """Store experience for future tasks.""" result = client.write(insight, domain="agent:crew") return f"Stored: {result.atom_id}" @crewai_tool("Recall") def recall(query: str) -> str: """Check past experience.""" memories = client.read(query, domain="agent:crew", top_k=5) return "\n".join(m.content for m in memories.items) or "No experience." researcher = Agent( role="Researcher", tools=[remember, recall, ...], backstory="You learn from past research. Always check recall() first." ) ``` ## The Feedback Loop What makes this an *experience* layer, not just a *memory* layer: ```python # Agent completes task successfully client.feedback( atom_ids=[memory.atom_id for memory in used_memories], outcome=1.0 # This worked! ) # Agent's approach failed client.feedback( atom_ids=[memory.atom_id for memory in used_memories], outcome=-0.5 # This didn't work ) ``` Successful strategies rank higher in future searches. Failed approaches fade. **The agent gets better over time.** ## Multi-Agent Memory Sharing Different agents, same memory pool: ```python # Researcher agent finds something client.write( "API rate limit is 100 req/min, not 1000 as documented", concepts=["api", "rate-limit"], domain="project:acme" ) # Developer agent reads it later memories = client.read( "API rate limits for this project", domain="project:acme" ) # → Knows about the real rate limit ``` ## Get Started 1. [Get an API key](https://console.mnemoverse.com) (free) 2. 
`pip install mnemoverse` 3. Wrap `write`/`read` as tools for your framework 4. Add to agent prompt: "Always check past experience before planning" --- URL: https://mnemoverse.com/llms.txt#pricing # Pricing Live tiers — sign up and upgrade at https://console.mnemoverse.com. | Plan | Queries/day | Atoms | Rate limit | Price | |------|-------------|-------|------------|-------| | Free | 1,000 | 10,000 | 60/min | $0 | | Pro | 50,000 | 500,000 | 600/min | $29/mo | | Team | 500,000 | 5,000,000 | 3,000/min | $149/mo | | Enterprise | Unlimited | Unlimited | Custom | contact sales | The **Enterprise** tier includes unlimited atoms and queries, dedicated infrastructure, **SSO**, **audit logs**, SLA, and custom data residency. Contact sales via https://console.mnemoverse.com.