
04 Advanced Configuration

doobidoo edited this page Nov 9, 2025 · 4 revisions

Advanced Configuration

Comprehensive guide for advanced MCP Memory Service configuration, integration patterns, and best practices.

Table of Contents

  • Best Practices
  • Integration Patterns
  • Performance Optimization
  • Security Configuration
  • Quick Reference

Memory Storage Guidelines

Write Clear, Searchable Content

❌ Bad: "Fixed the thing with the API"
✅ Good: "Fixed authentication timeout issue in /api/users endpoint by increasing JWT expiration to 24 hours"

Include Context and Specifics

❌ Bad: "Use this configuration"
✅ Good: "PostgreSQL connection pool configuration for production - handles 1000 concurrent connections"

❌ Bad: "Updated recently"  
✅ Good: "Updated Python to 3.11.4 on 2025-07-20 to fix asyncio performance issue"

Structure Long Content

# Meeting Notes - API Design Review
Date: 2025-07-20
Attendees: Team Lead, Backend Team

## Decisions:
- RESTful design for public API
- GraphQL for internal services

## Action Items:
- [ ] Create OpenAPI spec
- [ ] Set up API gateway

Tagging Strategy

Hierarchical Tagging

Use consistent hierarchies:

project: project-alpha
component: project-alpha-frontend
specific: project-alpha-frontend-auth

Standard Tag Categories

  1. Project/Product: project-name, product-x
  2. Technology: python, react, postgres
  3. Type: bug-fix, feature, documentation
  4. Status: completed, in-progress, blocked
  5. Priority: urgent, high, normal, low

Tag Naming Rules

  • Use lowercase: python not Python
  • Use hyphens: bug-fix not bug_fix
  • Be consistent: postgresql not postgres/pg/psql
  • Avoid versions in tags: python not python3.11
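
These rules are easy to enforce in code before anything is stored. A minimal sketch of a hypothetical normalize_tag() helper (the function and its alias map are illustrations, not part of MCP Memory Service itself):

```python
import re

# Example alias map; extend it to match your own conventions.
TAG_ALIASES = {"postgres": "postgresql", "pg": "postgresql", "psql": "postgresql"}

def normalize_tag(tag: str) -> str:
    """Lowercase, hyphenate, drop trailing version numbers, resolve aliases."""
    tag = tag.strip().lower().replace("_", "-").replace(" ", "-")
    tag = re.sub(r"[0-9]+(\.[0-9]+)*$", "", tag).rstrip("-.")  # python3.11 -> python
    return TAG_ALIASES.get(tag, tag)
```

Running every tag through a helper like this keeps automated and manual entries consistent.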

Search Optimization

Use Natural Language

✅ "What did we decide about authentication last week?"
✅ "Show me all Python debugging sessions"
✅ "Find memories about database optimization"

Combine Search Strategies

# Text search for broad matching
general_results = await memory.search("authentication")

# Tag search for precise filtering
tagged_results = await memory.search_by_tag(["auth", "security"])

# Time-based for recent context
recent_results = await memory.recall("last week")

Maintenance Routines

Daily (5 minutes)

Morning:
- Review yesterday's memories
- Tag any untagged entries
- Quick search test

Evening:
- Store key decisions/learnings
- Update task progress

Weekly (30 minutes)

  1. Run tag consolidation
  2. Archive completed project memories
  3. Review and improve poor tags
  4. Delete test/temporary memories
  5. Generate weekly summary
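
Step 4 of the weekly routine is easy to script. A sketch, assuming a client that exposes search_by_tag() and delete() as used elsewhere in this guide; the InMemoryService stub exists only to make the example runnable:

```python
import asyncio

# Tiny in-memory stand-in for the real client, just to make the sketch
# runnable; the real service exposes similar search_by_tag/delete calls.
class InMemoryService:
    def __init__(self):
        self.memories = {}  # id -> tags
        self._next = 0

    async def store(self, content, tags):
        self._next += 1
        self.memories[self._next] = tags
        return self._next

    async def search_by_tag(self, tags):
        return [i for i, t in self.memories.items() if set(tags) & set(t)]

    async def delete(self, memory_id):
        self.memories.pop(memory_id, None)

async def purge_temporary(service):
    """Weekly step 4: delete everything tagged test/temporary."""
    for memory_id in await service.search_by_tag(["test", "temporary"]):
        await service.delete(memory_id)

async def demo():
    svc = InMemoryService()
    await svc.store("scratch note", ["temporary"])
    kept = await svc.store("prod decision", ["architecture"])
    await purge_temporary(svc)
    return list(svc.memories) == [kept]
```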

Monthly (1 hour)

  1. Analyze tag usage statistics
  2. Merge redundant tags
  3. Update tagging guidelines
  4. Performance optimization check
  5. Backup important memories
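
Monthly step 1 can be automated against an export of your memories. A sketch, assuming the export is a list of dicts with a 'tags' list (the shape is an assumption, not the service's documented format):

```python
from collections import Counter

def tag_usage(memories):
    """Return (tag, count) pairs, most used first."""
    counts = Counter(tag for m in memories for tag in m.get("tags", []))
    return counts.most_common()
```

The output makes redundant tags (step 2) obvious: near-duplicates like postgres/postgresql cluster at similar counts.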

Integration Patterns

Development Tool Integrations

Git Hooks Integration

Automatically store commit information:

#!/bin/bash
# .git/hooks/post-commit

COMMIT_MSG=$(git log -1 --pretty=%B)
BRANCH=$(git branch --show-current)
FILES=$(git diff-tree --no-commit-id --name-only -r HEAD)

# Store in memory service
echo "Store commit memory: Branch: $BRANCH, Message: $COMMIT_MSG, Files: $FILES" | \
  mcp-memory-cli store --tags "git,commit,$BRANCH"

VS Code Extension Pattern

Create a command to store code snippets:

// extension.js
vscode.commands.registerCommand('mcp.storeSnippet', async () => {
    const editor = vscode.window.activeTextEditor;
    const selection = editor.document.getText(editor.selection);
    const language = editor.document.languageId;
    
    await mcpClient.storeMemory({
        content: `Code snippet:
\`\`\`${language}
${selection}
\`\`\``,
        tags: ['code-snippet', language, 'vscode']
    });
});

CI/CD Pipeline Integration

Store deployment information:

# .github/workflows/deploy.yml
- name: Store Deployment Memory
  run: |
    MEMORY="Deployment to ${{ github.event.inputs.environment }}
    Version: ${{ github.sha }}
    Status: ${{ job.status }}
    Timestamp: $(date -u +"%Y-%m-%dT%H:%M:%SZ")"

    # Build the payload with jq so the embedded newlines are escaped
    # into valid JSON before sending it to the memory service
    jq -n --arg content "$MEMORY" \
          --arg env "${{ github.event.inputs.environment }}" \
          '{content: $content, tags: ["deployment", $env]}' | \
      curl -X POST http://localhost:8080/memory/store \
        -H "Content-Type: application/json" \
        -d @-

Automation Patterns

Scheduled Memory Collection

Daily summary automation:

# daily_summary.py
import asyncio
import time
from datetime import datetime

import schedule  # pip install schedule

async def daily_memory_summary():
    # Collect today's memories
    today = datetime.now().date()
    memories = await memory_service.recall("today")

    # Generate summary (extract_topics/count_completed are your own helpers)
    summary = f"Daily Summary for {today}:\n"
    summary += f"- Total memories: {len(memories)}\n"
    summary += f"- Key topics: {extract_topics(memories)}\n"
    summary += f"- Completed tasks: {count_completed(memories)}\n"

    # Store summary
    await memory_service.store(
        content=summary,
        tags=["daily-summary", str(today)]
    )

# Schedule for 6 PM daily, then keep the scheduler running
schedule.every().day.at("18:00").do(lambda: asyncio.run(daily_memory_summary()))

while True:
    schedule.run_pending()
    time.sleep(60)

Event-Driven Memory Creation

Automatically capture important events:

// error_logger.js
class MemoryErrorLogger {
    constructor(memoryService) {
        this.memory = memoryService;
    }
    
    async logError(error, context) {
        // Store error details
        await this.memory.store({
            content: `Error: ${error.message}
Stack: ${error.stack}
Context: ${JSON.stringify(context)}`,
            tags: ['error', 'automated', context.service]
        });
        
        // Check for similar errors
        const similar = await this.memory.search(`error ${error.message.split(' ')[0]}`);
        if (similar.length > 0) {
            console.log('Similar errors found:', similar.length);
        }
    }
}

API Integration

REST API Wrapper

Simple HTTP interface for memory operations:

# Requires Flask 2.0+ installed with the async extra: pip install "flask[async]"
# (memory_service is your configured client)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/memory/store', methods=['POST'])
async def store_memory():
    data = request.json
    result = await memory_service.store(
        content=data['content'],
        tags=data.get('tags', [])
    )
    return jsonify({"id": result.id})

@app.route('/memory/search', methods=['GET'])
async def search_memories():
    query = request.args.get('q')
    results = await memory_service.search(query)
    return jsonify([r.to_dict() for r in results])

Webhook Integration

Trigger memory storage from external services:

// webhook_handler.js
app.post('/webhook/github', async (req, res) => {
    const { action, pull_request, repository } = req.body;
    
    if (action === 'closed' && pull_request.merged) {
        await memoryService.store({
            content: `PR Merged: ${pull_request.title}
Repo: ${repository.name}
Files changed: ${pull_request.changed_files}`,
            tags: ['github', 'pr-merged', repository.name]
        });
    }
    
    res.status(200).send('OK');
});

Workflow Examples

Documentation Workflow

Automatically document decisions:

from datetime import datetime

class DecisionLogger:
    def __init__(self, memory_service):
        self.memory = memory_service
    
    async def log_decision(self, decision_type, title, rationale, alternatives):
        content = f"""
        Decision: {title}
        Type: {decision_type}
        Date: {datetime.now().isoformat()}
        
        Rationale: {rationale}
        
        Alternatives Considered:
        {chr(10).join(f'- {alt}' for alt in alternatives)}
        """
        
        await self.memory.store(
            content=content,
            tags=['decision', decision_type, 'architecture']
        )

Team Knowledge Sharing

Broadcast important updates:

async def share_team_update(update_type, content, team_members):
    # Store in memory with team visibility
    memory = await memory_service.store(
        content=f"Team Update ({update_type}): {content}",
        tags=['team-update', update_type, 'shared']
    )
    
    # Notify team members (example with Slack; notify_slack is your own helper)
    for member in team_members:
        await notify_slack(
            channel=member.slack_id,
            message=f"New {update_type} update stored: {memory.id}"
        )

Performance Optimization

Natural Memory Triggers v7.1.0 Performance

Multi-Tier Performance Configuration:

Configure performance profiles for different workflows:

# Speed-focused for quick coding sessions (< 100ms)
node ~/.claude/hooks/memory-mode-controller.js profile speed_focused

# Balanced for general development (< 200ms, recommended)
node ~/.claude/hooks/memory-mode-controller.js profile balanced

# Memory-aware for architecture work (< 500ms)
node ~/.claude/hooks/memory-mode-controller.js profile memory_aware

# Adaptive learning profile
node ~/.claude/hooks/memory-mode-controller.js profile adaptive

Performance Monitoring:

# Monitor system performance
node ~/.claude/hooks/memory-mode-controller.js metrics

# Check trigger accuracy and timing
node ~/.claude/hooks/memory-mode-controller.js status --verbose

# Test specific queries
node ~/.claude/hooks/memory-mode-controller.js test "What did we decide about authentication?"

Cache Optimization:

# Optimize semantic cache for your usage patterns
node ~/.claude/hooks/memory-mode-controller.js config set performance.cacheSize 75

# Adjust cache cleanup behavior
node ~/.claude/hooks/memory-mode-controller.js config set performance.cacheCleanupThreshold 0.8

# Monitor cache effectiveness
node ~/.claude/hooks/memory-mode-controller.js cache stats

Advanced Performance Tuning:

{
  "naturalTriggers": {
    "triggerThreshold": 0.6,
    "cooldownPeriod": 30000,
    "maxMemoriesPerTrigger": 5
  },
  "performance": {
    "defaultProfile": "balanced",
    "enableMonitoring": true,
    "autoAdjust": true,
    "profiles": {
      "custom_profile": {
        "maxLatency": 250,
        "enabledTiers": ["instant", "fast"],
        "backgroundProcessing": true,
        "description": "Custom optimized profile"
      }
    }
  }
}

Optimize Search Queries

# ❌ Inefficient: Multiple separate searches
results1 = await search("python")
results2 = await search("debugging")
results3 = await search("error")

# ✅ Efficient: Combined search
results = await search("python debugging error")

Use Appropriate Result Limits

# For browsing
results = await search(query, limit=10)

# For existence check
results = await search(query, limit=1)

# For analysis
results = await search(query, limit=50)

Batch Operations

# ❌ Individual operations
for memory in memories:
    await store_memory(memory)

# ✅ Batch operation
await store_memories_batch(memories)
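
If your client only exposes a single-memory store call, store_memories_batch() is straightforward to build yourself. A sketch with bounded concurrency (the function name mirrors the example above; store_memory stands in for your client's store call):

```python
import asyncio

async def store_memories_batch(memories, store_memory, max_concurrency=10):
    """Store many memories concurrently, with at most
    max_concurrency requests in flight at once."""
    semaphore = asyncio.Semaphore(max_concurrency)

    async def _store(memory):
        async with semaphore:
            return await store_memory(memory)

    # gather preserves input order in its results
    return await asyncio.gather(*(_store(m) for m in memories))
```

The semaphore keeps a large batch from flooding the service while still overlapping network round trips.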

Security Configuration

OAuth 2.1 Configuration (v7.0.0+)

Production OAuth Settings:

# Essential OAuth environment variables
export MCP_OAUTH_ENABLED=true
export MCP_OAUTH_SECRET_KEY="your-secure-256-bit-secret-key"
export MCP_OAUTH_ISSUER="https://your-domain.com"
export MCP_OAUTH_ACCESS_TOKEN_EXPIRE_MINUTES=30    # Shorter for security
export MCP_OAUTH_AUTHORIZATION_CODE_EXPIRE_MINUTES=5

# HTTPS enforcement
export MCP_HTTPS_ENABLED=true
export MCP_SSL_CERT_FILE="/path/to/cert.pem"
export MCP_SSL_KEY_FILE="/path/to/key.pem"

OAuth + API Key Dual Authentication:

# Support both OAuth and legacy API keys
export MCP_OAUTH_ENABLED=true
export MCP_API_KEY="fallback-api-key-for-legacy-clients"

# OAuth takes precedence, API key as fallback
# Useful for gradual migration to OAuth
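
The precedence described above could be implemented as a small dispatcher. A sketch, where validate_oauth_token stands in for the JWT validation shown later on this page and the "ApiKey" header scheme is an assumption:

```python
import hmac

def authenticate(auth_header, validate_oauth_token, api_key):
    """Try OAuth bearer validation first, then the legacy API key."""
    if auth_header.startswith("Bearer "):
        try:
            claims = validate_oauth_token(auth_header[len("Bearer "):])
            return {"scheme": "oauth", "claims": claims}
        except Exception:
            pass  # fall through to the legacy key
    candidate = auth_header.removeprefix("ApiKey ")
    if hmac.compare_digest(candidate, api_key):  # constant-time compare
        return {"scheme": "api-key"}
    return None
```

Returning the scheme alongside the claims lets you log which clients still use the legacy key, which helps track migration progress.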

OAuth Client Management:

# Enable client persistence for production
export MCP_OAUTH_CLIENT_STORAGE="persistent"

# Client registration rate limiting
export MCP_OAUTH_REGISTRATION_RATE_LIMIT="10/hour"

# Enable OAuth audit logging
export MCP_OAUTH_AUDIT_LOG=true

JWT Token Security

Token Configuration:

# Use RS256 for production (requires key pair)
export MCP_JWT_ALGORITHM="RS256"
export MCP_JWT_PRIVATE_KEY_FILE="/path/to/private.pem"
export MCP_JWT_PUBLIC_KEY_FILE="/path/to/public.pem"

# Token security settings
export MCP_JWT_ISSUER="https://your-memory-service.com"
export MCP_JWT_AUDIENCE="mcp-memory-clients"

Token Validation:

# Custom JWT validation example (python-jose; `settings` holds your app config)
from jose import jwt, JWTError
from fastapi import HTTPException

async def validate_oauth_token(token: str):
    try:
        payload = jwt.decode(
            token,
            settings.oauth_secret_key,
            algorithms=["HS256"],
            audience=settings.jwt_audience,
            issuer=settings.jwt_issuer
        )
        return payload
    except JWTError:
        raise HTTPException(status_code=401, detail="Invalid token")

Sensitive Information Guidelines

# ❌ Don't store
- Passwords or API keys
- Personal identification numbers
- Credit card information
- Private keys
- OAuth client secrets

# ✅ Store references instead
"AWS credentials stored in vault under key 'prod-api-key'"
"OAuth client configured with environment variables"

Access Control

# Tag sensitive memories
await store_memory(
    content="Architecture decision for payment system",
    tags=["architecture", "payments", "confidential"]
)

# OAuth scope-based access
@require_scope("admin")
async def admin_endpoint():
    return sensitive_data

Data Retention

# Set expiration for temporary data
await store_memory(
    content="Temporary debug log",
    tags=["debug", "temporary"],
    metadata={"expires": "2025-08-01"}
)

# OAuth token cleanup
export MCP_OAUTH_CLEANUP_EXPIRED_TOKENS=true
export MCP_OAUTH_TOKEN_CLEANUP_INTERVAL="1h"
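
The "expires" metadata convention above needs a periodic cleanup pass. A sketch of the selection step, assuming memories are exported as dicts shaped like the example (the shape is an assumption):

```python
from datetime import date

def find_expired(memories, today=None):
    """Return ids of memories whose metadata['expires'] date has passed."""
    today = today or date.today()
    return [m["id"] for m in memories
            if (exp := m.get("metadata", {}).get("expires"))
            and date.fromisoformat(exp) < today]
```

Feed the returned ids to your delete call from a cron job or the weekly maintenance routine.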

Backup Strategy

  • Regular automated backups
  • Encrypted backup storage
  • Test restore procedures
  • Version control for configurations
  • OAuth client backup: Include client registrations in backups
  • JWT key rotation: Regular key rotation for production

Quick Reference

Essential Commands

# Store with tags
store "content" --tags "tag1,tag2"

# Search recent
search "query" --time "last week"

# Clean up
delete --older-than "6 months" --tag "temporary"

# Export important
export --tag "important" --format json

Common Patterns

# Decision tracking
f"Decision: {title} | Rationale: {why} | Date: {when}"

# Error documentation  
f"Error: {message} | Solution: {fix} | Prevention: {how}"

# Learning capture
f"TIL: {concept} | Source: {where} | Application: {how}"

Integration Best Practices

  1. Use Consistent Tagging: Establish tag conventions for automated entries
  2. Rate Limiting: Implement limits to prevent memory spam
  3. Error Handling: Always handle memory service failures gracefully
  4. Async Operations: Use async patterns to avoid blocking
  5. Batch Operations: Group related memories when possible
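
Practice 2 can be as simple as a token bucket in front of your automated writers. A minimal sketch (capacity and refill rate are placeholders to tune for your pipeline):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` writes, refilling at refill_per_sec."""
    def __init__(self, capacity, refill_per_sec, clock=time.monotonic):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Wrap automated store calls in `if bucket.allow():` and drop (or queue) anything over the limit.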

These advanced configurations will help you build a powerful, integrated memory system that grows more valuable over time.
