# Agent-ready documentation

We design these docs to be useful for both humans and machines. Pages carry predictable metadata (frontmatter), stable IDs, and consistent section ordering. This enables automated indexing, MCP doc servers, and agent navigation.
## Start here

### How this doc set is organized

Top-level directories under `/architecture`:

- `adapters/` – HTTP & MCP adapters, examples, compatibility matrix
- `contracts/` – canonical contracts and schemas across layers
- `evaluation/` – benchmarks and metrics
- `experience-layer/` – L4 specifications and guides
- `memory-layer/` – L5 policies, errors/timeouts, assembler
- `noosphere-layer/` – L1 architecture (vector/graph search, KG, MCP integration)
- `orchestration/` – ACS/CEO/HCS specs, API, implementation notes
- `project-library/` – L2 models, indexing and operations
- `workshop/` – L3 interfaces, tool registry, validation pipelines

Each directory has its own `README.md` acting as a local index when present.
## Layers & components

## Popular references

## Machine interface (MCP)

This documentation is served by our `@mcp-x/mcp-docs-server`, which provides these tools:

- `list_documents` – List all available docs, optionally filtered by section
- `get_document` – Get the full content of a specific document by path
- `search_docs` – Full-text search through all documentation

Real MCP configuration:
```json
{
  "name": "@mcp-x/mcp-docs-server",
  "version": "1.0.0",
  "description": "Universal MCP Server Demo - Template for documentation-based MCP servers",
  "tools": ["list_documents", "get_document", "search_docs"],
  "sections": ["guides", "api", "tutorials", "reference", "examples"],
  "capabilities": {
    "search": {
      "maxResults": 50,
      "fuzzyThreshold": 0.6
    }
  }
}
```
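For agents talking to the server directly, each tool is invoked with a standard MCP `tools/call` JSON-RPC request. The sketch below is a minimal illustration only; the argument names (`query`, `maxResults`) are assumptions inferred from the tool descriptions and search capabilities above, not a documented schema:

```typescript
// Minimal sketch: invoking search_docs via MCP's JSON-RPC framing.
// The argument names (query, maxResults) are assumed for illustration.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

const request: McpToolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_docs",
    arguments: {
      query: "context assembly budget",
      maxResults: 10, // capped by capabilities.search.maxResults (50)
    },
  },
};

// Send this over your MCP transport (stdio or HTTP); the server replies
// with a result whose content items carry the matching document excerpts.
console.log(JSON.stringify(request));
```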
## Principles (human + machine)

- Stable IDs and links: each key doc has an `id` and a predictable path
- Consistent frontmatter: `docType`, `version`, `status`, `audience`, `tags`
- Canonical anchors: Overview (concepts), Contracts Registry (interfaces)
- Deterministic navigation: auto-generated sidebar mirrors the filesystem
- Clear degradation: drafts labeled; missing pages tolerated in soft link checks

### Conventions

- Version: this set documents the 0.1 line; drafts are clearly marked
- Ordering: pages may use `order:` in frontmatter or numeric filename prefixes
- Acronyms: normalized (API, ACS, CEO, HCS, MCP, SLO/SLA, L1–L8, KV, ADR)
- Privacy: never include raw sensitive text; secure fields must be bypassed
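To make the frontmatter contract above concrete, here is one way the fields could be typed. This is a sketch under assumptions: the field names come from this page, but the value types and enum values (`draft`, `stable`, and so on) are illustrative, not the canonical schema:

```typescript
// Hypothetical typing of the frontmatter contract described above.
// Field names are from this page; value types are assumptions.
interface DocFrontmatter {
  id: string;                 // stable ID, kept across renames
  docType: string;            // e.g. "spec" | "guide" | "adr" (assumed values)
  version: string;            // e.g. "0.1"
  status: "draft" | "stable"; // drafts are clearly labeled (assumed values)
  audience: string[];         // e.g. ["human", "agent"]
  tags: string[];
  order?: number;             // optional sibling ordering
}
```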
## System Architecture

### Core Components & Technology Stack

Layer Technologies:
- **L1 (Noosphere)**: Vector databases (Pinecone/Weaviate), graph databases (Neo4j), Python/TypeScript
- **L2 (Project Library)**: Document indexing (Elasticsearch), Git integration, Node.js
- **L3 (Workshop)**: Tool orchestration, Docker containers, Kubernetes
- **L4 (Experience)**: Time-series databases (InfluxDB), event streaming (Kafka)
- **L5 (Memory)**: Redis caching, memory-optimized instances, smart eviction
- **L8 (Evaluation)**: Metrics collection (Prometheus), Jupyter notebooks, Python

Infrastructure Requirements:

- **Compute**: 16+ vCPUs, 64+ GB RAM per layer for production
- **Storage**: 1 TB+ SSD for L1/L2, 100 GB+ for other layers
- **Network**: 10 Gbps+ for inter-layer communication, CDN for static content

### Data Flow Architecture
Request Lifecycle:
1. **Query Ingestion (L7)** → MCP protocol normalization
2. **Intent Processing (L6)** → Budget allocation, risk assessment
3. **Context Assembly (L5)** → Multi-layer content aggregation
4. **Knowledge Retrieval (L1/L2/L4)** → Parallel provider execution
5. **Response Generation (L5→L6→L7)** → Budget-constrained assembly
6. **Quality Monitoring (L8)** → Continuous feedback loops
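Read as code, the lifecycle is a budget-threaded pipeline. The sketch below is illustrative only; every function and type name is assumed for this example and stubbed so it runs standalone, and none of them belong to a published Mnemoverse API:

```typescript
// Hypothetical sketch of the request lifecycle; all names are illustrative.
interface Budget { tokens: number; deadlineMs: number; }
interface Fragment { source: "L1" | "L2" | "L4"; text: string; score: number; }

// Stubbed stages so the sketch type-checks and runs standalone.
const normalizeQuery = (raw: string) => raw.trim();                                  // L7
const allocateBudget = (_q: string): Budget => ({ tokens: 4096, deadlineMs: 800 });  // L6
const searchLayer = (source: Fragment["source"]) =>
  async (q: string): Promise<Fragment[]> => [{ source, text: `hit for "${q}"`, score: 0.5 }];
const assembleContext = (frags: Fragment[], b: Budget) =>
  // Character budget stands in for a token budget in this toy version.
  frags.sort((a, z) => z.score - a.score).map(f => f.text).join("\n").slice(0, b.tokens); // L5
const generate = async (ctx: string, _b: Budget) => `answer grounded in: ${ctx}`;
const recordQualitySignals = (_q: string, _a: string) => {};                          // L8

async function handleQuery(raw: string): Promise<string> {
  const query = normalizeQuery(raw);                  // 1. Query Ingestion (L7)
  const budget = allocateBudget(query);               // 2. Intent Processing (L6)
  // 4. Knowledge Retrieval: L1/L2/L4 providers run in parallel.
  const fragments = (
    await Promise.all([searchLayer("L1")(query), searchLayer("L2")(query), searchLayer("L4")(query)])
  ).flat();
  const context = assembleContext(fragments, budget); // 3. Context Assembly (L5)
  const answer = await generate(context, budget);     // 5. Response Generation (L5→L6→L7)
  recordQualitySignals(query, answer);                // 6. Quality Monitoring (L8)
  return answer;
}

handleQuery("how does budget enforcement work?").then(console.log);
```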
## Testing & Validation

### Testing Strategy

Unit Testing Guidelines:

- **Coverage Target**: 90%+ for core algorithms, 80%+ for integration layers
- **Frameworks**: Jest (TypeScript), pytest (Python), Go test (Go services)
- **Focus Areas**: Context assembly logic, budget enforcement, provider selection (see the Jest sketch below)

Integration Testing:

- **Inter-layer Communication**: Test L1→L2→L5 data flows end-to-end
- **Provider Integration**: Mock external services, validate contract compliance
- **Performance Testing**: Load testing with realistic query patterns
- **Error Scenarios**: Network partitions, service degradation, timeout handling

End-to-End Testing:

- **Critical User Journeys**: Debug session walkthrough, research query flow
- **Cross-platform**: VS Code extension, CLI tools, web interface
- **Performance Benchmarks**: P95 < 800 ms end-to-end, throughput > 100 QPS
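As one concrete example of the unit-testing focus areas, a Jest test for budget enforcement might look like the following. The `assembleContext` function and its contract are assumed here for illustration, not taken from the real codebase:

```typescript
// Hypothetical Jest test; assembleContext and its contract are assumed.
interface Fragment { text: string; score: number; }

// Toy implementation under test: keep best-scoring fragments within budget.
function assembleContext(fragments: Fragment[], maxChars: number): string {
  const out: string[] = [];
  let used = 0;
  for (const f of [...fragments].sort((a, b) => b.score - a.score)) {
    if (used + f.text.length > maxChars) break;
    out.push(f.text);
    used += f.text.length;
  }
  return out.join("\n");
}

describe("assembleContext budget enforcement", () => {
  it("never exceeds the character budget", () => {
    const frags = [{ text: "a".repeat(60), score: 0.9 }, { text: "b".repeat(60), score: 0.8 }];
    expect(assembleContext(frags, 100).length).toBeLessThanOrEqual(100);
  });

  it("prefers higher-scoring fragments", () => {
    const frags = [{ text: "low", score: 0.1 }, { text: "high", score: 0.9 }];
    expect(assembleContext(frags, 4)).toBe("high");
  });
});
```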
### Quality Gates

Pre-deployment Validation:

```bash
# Performance benchmarks
npm run test:performance -- --target-p95=800ms

# Security scanning
npm run test:security -- --mode=privacy-compliance

# Integration validation
npm run test:integration -- --layers=L1,L2,L5,L8

# Documentation coverage
npm run docs:validate -- --coverage-min=95%
```
Acceptance Criteria:

- Quality metrics: NDCG@10 ≥ 0.60, Precision@5 ≥ 0.50
- Performance: P95 ≤ 800 ms, error rate < 1%
- Privacy: Zero leakage in block mode, effective redaction
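For reference, one common form of NDCG@K (the first acceptance metric) normalizes the discounted cumulative gain of the top-K results by that of the ideal ranking:

$$
\mathrm{NDCG@K} = \frac{\mathrm{DCG@K}}{\mathrm{IDCG@K}},
\qquad
\mathrm{DCG@K} = \sum_{i=1}^{K} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i+1)}
$$

where $\mathrm{rel}_i$ is the relevance of the result at rank $i$ and $\mathrm{IDCG@K}$ is the DCG of the ideal ordering; the gate requires NDCG@10 ≥ 0.60.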
## Development & Operations

### Development Environment Setup

Prerequisites:

```bash
# Required tools
node >= 18.0.0
python >= 3.9
docker >= 24.0
kubernetes >= 1.28

# Development dependencies
npm install -g @mnemoverse/cli
pip install mnemoverse-dev-tools
```
Local Development Stack:

```yaml
# docker-compose.dev.yml
version: '3.8'
services:
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: mnemoverse_dev
      POSTGRES_PASSWORD: dev_password
    ports: ["5432:5432"]
  vector-db:
    image: pinecone/pinecone-local:latest
    ports: ["8080:8080"]
```
### Deployment Architecture

Production Environment:

- **Staging**: Feature validation, integration testing
- **Production**: Blue-green deployment, gradual rollout
- **DR Environment**: Cross-region backup, RTO < 1 hour

CI/CD Pipeline:
```yaml
# Simplified stage layout of .github/workflows/deploy.yml
stages:
  - test: [unit, integration, e2e]
  - security: [dependency-scan, secret-scan, privacy-validation]
  - build: [docker-build, artifact-registry]
  - deploy: [staging, production-canary, production-full]
```
### Operational Scenarios

Scaling Procedures:

- **Horizontal Scaling**: Add provider instances behind a load balancer
- **Vertical Scaling**: Increase memory/CPU for memory-intensive L5 operations
- **Data Scaling**: Shard the knowledge base by domain/project boundaries (sketched below)
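To make the data-scaling item concrete, here is one simple way shard routing by project boundary could look. The shard count and key scheme are assumptions for illustration:

```typescript
// Hypothetical shard router: documents are routed by a project/domain key.
import { createHash } from "node:crypto";

const SHARD_COUNT = 8; // assumed; real deployments would size this per capacity plan

function shardFor(projectId: string): number {
  // Stable hash so a project's documents always land on the same shard.
  const digest = createHash("sha256").update(projectId).digest();
  return digest.readUInt32BE(0) % SHARD_COUNT;
}

console.log(shardFor("mnemoverse-docs")); // e.g. 3
```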
Maintenance Windows:

- **Weekly**: Dependency updates, security patches (30 min maintenance window)
- **Monthly**: Performance optimization, capacity planning (2 hr maintenance window)
- **Quarterly**: Major version upgrades, architectural improvements (4 hr maintenance window)

Disaster Recovery:

- **Backup Strategy**: Daily snapshots, cross-region replication
- **Recovery Procedures**: Automated failover, manual validation steps
- **RTO Target**: < 1 hour for critical services, < 4 hours for full restoration
### Key Metrics

System Performance:

- **Latency**: P50/P95/P99 response times per layer
- **Throughput**: Queries per second, concurrent user capacity
- **Availability**: 99.9% uptime target, planned maintenance exclusions

Quality Metrics:

- **Search Quality**: NDCG@K scores, user satisfaction ratings
- **Context Relevance**: Fragment usefulness, contradiction rates
- **Learning Effectiveness**: Experience pattern accuracy, improvement trends

Resource Utilization:
```txt
# Example monitoring queries
mnemoverse_memory_usage_ratio{layer="L5"} > 0.85
mnemoverse_query_latency_p95{layer="orchestration"} > 800
mnemoverse_error_rate{service="context-assembly"} > 0.01
```
### Alerting & Escalation

Critical Alerts (PagerDuty integration):

- Service unavailability > 5 minutes
- Error rate > 5% for > 2 minutes
- P95 latency > 1.5 s for > 5 minutes

Warning Alerts (Slack integration):

- Memory usage > 85%
- Quality metrics below baseline
- Unusual traffic patterns

## Status

Work in progress. Some deep-dive drafts live under `/architecture/orchestration` and may reference upcoming pages. The Overview and Contracts Registry are the canonical sources to align on first.
## Contribute

- Keep frontmatter flat where possible (simple `key: value`) for robust tooling
- Prefer stable slugs; avoid renames without adding redirects
- Add `order:` to control sibling ordering when needed
- Use short, descriptive titles; keep acronyms normalized