
Experience Layer for Agent Frameworks

Agents repeat mistakes. Mnemoverse gives them experience.

The Problem

Your LangChain agent solves a task, learns nothing, and starts from scratch next time. Multi-step agents waste tokens rediscovering what worked. CrewAI agents don't share learnings across runs.

The Solution

An experience layer between your agent framework and the LLM. Agents write what worked (and what didn't), read past experience before planning, and improve over time.
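The write/read loop can be sketched with a toy in-memory store. This is purely illustrative: the real client persists across runs and ranks results by relevance, while this stand-in just matches on shared concept tags and is not the Mnemoverse API.

```python
# Toy in-memory experience store illustrating the write/read loop.
# Illustrative only: the real Mnemoverse client persists across runs
# and scores relevance; this sketch matches on shared concept tags.
class ToyExperienceStore:
    def __init__(self):
        self.entries = []  # list of (insight, concept tags) pairs

    def write(self, insight: str, concepts: list[str]) -> None:
        """Record what worked (or didn't) after a task."""
        self.entries.append((insight, set(concepts)))

    def read(self, concepts: list[str]) -> list[str]:
        """Before planning, fetch insights sharing any concept tag."""
        wanted = set(concepts)
        return [insight for insight, tags in self.entries if tags & wanted]

store = ToyExperienceStore()
store.write("Retry with backoff on 429 responses", ["api", "retries"])
store.write("Summarize docs before extracting entities", ["nlp"])

# Next run: the agent checks experience before planning.
past = store.read(["api"])
# → ["Retry with backoff on 429 responses"]
```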

LangChain

```python
from mnemoverse import MnemoClient
from langchain.tools import tool

client = MnemoClient(api_key="mk_live_YOUR_KEY")

@tool
def remember(insight: str, concepts: list[str] | None = None) -> str:
    """Store a lesson learned for future reference."""
    result = client.write(insight, concepts=concepts or [], domain="agent:experience")
    return f"Remembered (importance: {result.importance:.2f})"

@tool
def recall(query: str) -> str:
    """Check past experience before starting a task."""
    memories = client.read(query, domain="agent:experience", top_k=5)
    if not memories.items:
        return "No relevant past experience."
    return "\n".join(
        f"- [{m.relevance:.0%}] {m.content}" for m in memories.items
    )
```

Add to your agent:

```python
from langchain.agents import create_tool_calling_agent

agent = create_tool_calling_agent(
    llm=llm,
    tools=[remember, recall, *your_other_tools],
    prompt=prompt,  # Include: "Always check recall() before starting tasks"
)
```

LangGraph

Memory as a node in your graph:

```python
from langgraph.graph import StateGraph

def check_experience(state):
    """Node: check if we've done this before."""
    memories = client.read(state["task"], domain="agent:experience")
    state["experience"] = [m.content for m in memories.items]
    return state

def save_experience(state):
    """Node: save what we learned."""
    if state.get("outcome"):
        client.write(
            f"Task: {state['task']} → {state['outcome']}",
            concepts=state.get("concepts", []),
            domain="agent:experience"
        )
    return state

graph = StateGraph(State)
graph.add_node("check_experience", check_experience)
graph.add_node("plan", plan_task)
graph.add_node("execute", execute_task)
graph.add_node("save_experience", save_experience)

graph.set_entry_point("check_experience")
graph.add_edge("check_experience", "plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", "save_experience")

app = graph.compile()
```

n8n

Use the HTTP Request node to call the Mnemoverse API directly:

Write memory:

  • Method: POST
  • URL: https://core.mnemoverse.com/api/v1/memory/write
  • Headers: X-Api-Key: mk_live_YOUR_KEY
  • Body: {"content": "{{ $json.insight }}", "concepts": ["{{ $json.topic }}"]}

Read memory:

  • Method: POST
  • URL: https://core.mnemoverse.com/api/v1/memory/read
  • Headers: X-Api-Key: mk_live_YOUR_KEY
  • Body: {"query": "{{ $json.question }}", "top_k": 5}
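Outside n8n, the same endpoints can be called from any HTTP client. A minimal sketch with Python's standard-library urllib, using the URLs and header from the settings above — the request is built but deliberately not sent here, and the body fields mirror the examples rather than a full API schema:

```python
import json
import urllib.request

BASE = "https://core.mnemoverse.com/api/v1"

def build_request(endpoint: str, body: dict, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request matching the n8n settings above."""
    return urllib.request.Request(
        url=f"{BASE}/memory/{endpoint}",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

write_req = build_request(
    "write",
    {"content": "Rate limit is 100 req/min", "concepts": ["api"]},
    api_key="mk_live_YOUR_KEY",
)
read_req = build_request(
    "read",
    {"query": "rate limits", "top_k": 5},
    api_key="mk_live_YOUR_KEY",
)
# urllib.request.urlopen(write_req) would actually send it; omitted here.
```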

A custom n8n node is coming soon; track progress in the GitHub issue.

CrewAI

```python
from crewai import Agent, Task, Crew
from crewai.tools import tool as crewai_tool

@crewai_tool("Remember")
def remember(insight: str) -> str:
    """Store experience for future tasks."""
    result = client.write(insight, domain="agent:crew")
    return f"Stored: {result.atom_id}"

@crewai_tool("Recall")
def recall(query: str) -> str:
    """Check past experience."""
    memories = client.read(query, domain="agent:crew", top_k=5)
    return "\n".join(m.content for m in memories.items) or "No experience."

researcher = Agent(
    role="Researcher",
    goal="Research tasks for the crew",  # Agent requires a goal
    tools=[remember, recall],  # plus your other tools
    backstory="You learn from past research. Always check recall() first."
)
```

The Feedback Loop

What makes this an experience layer, not just a memory layer:

```python
# Agent completes task successfully
client.feedback(
    atom_ids=[memory.atom_id for memory in used_memories],
    outcome=1.0  # This worked!
)

# Agent's approach failed
client.feedback(
    atom_ids=[memory.atom_id for memory in used_memories],
    outcome=-0.5  # This didn't work
)
```

Successful strategies rank higher in future searches. Failed approaches fade. The agent gets better over time.
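The effect of feedback on retrieval can be illustrated with a toy outcome-weighted scorer. The actual weighting inside Mnemoverse is not documented here; this only shows the general idea that positive outcomes push a memory up the ranking and negative ones push it down.

```python
# Toy outcome-weighted ranking: each memory carries a score that
# feedback nudges up or down; reads return highest-scored first.
# Illustrative only — not the actual Mnemoverse ranking function.
memories = {
    "a1": {"content": "Use pagination for large result sets", "score": 0.5},
    "a2": {"content": "Scrape the HTML instead of the API", "score": 0.5},
}

def feedback(atom_ids, outcome, lr=0.2):
    """Shift each memory's score toward the observed outcome."""
    for atom_id in atom_ids:
        memories[atom_id]["score"] += lr * outcome

def read_ranked():
    """Return contents ordered by current score, best first."""
    ranked = sorted(memories.values(), key=lambda m: m["score"], reverse=True)
    return [m["content"] for m in ranked]

feedback(["a1"], outcome=1.0)   # pagination worked
feedback(["a2"], outcome=-0.5)  # scraping failed
# read_ranked() now puts the pagination strategy first
```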

Multi-Agent Memory Sharing

Different agents, same memory pool:

```python
# Researcher agent finds something
client.write(
    "API rate limit is 100 req/min, not 1000 as documented",
    concepts=["api", "rate-limit"],
    domain="project:acme"
)

# Developer agent reads it later
memories = client.read(
    "API rate limits for this project",
    domain="project:acme"
)
# → Knows about the real rate limit
```
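The sharing hinges on both agents passing the same domain string. A toy sketch of domain-scoped storage makes the isolation visible — this is a stand-in for the idea, not the client's internals:

```python
from collections import defaultdict

# Toy domain-scoped pool: writes and reads are keyed by domain,
# so any agent using the same domain string sees the same memories.
# Illustrative only — not how the Mnemoverse backend stores data.
pool = defaultdict(list)

def write(content: str, domain: str) -> None:
    pool[domain].append(content)

def read(domain: str) -> list[str]:
    return pool[domain]

# Researcher agent writes into the shared project domain...
write("Real rate limit is 100 req/min", domain="project:acme")
# ...while a different domain stays isolated from it.
write("Crew-internal note", domain="agent:crew")

# Developer agent, using the same project domain, sees the finding.
shared = read("project:acme")
# → ["Real rate limit is 100 req/min"]
```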

Get Started

  1. Get an API key (free)
  2. pip install mnemoverse
  3. Wrap write/read as tools for your framework
  4. Add to agent prompt: "Always check past experience before planning"