04 Advanced Configuration
Comprehensive guide for advanced MCP Memory Service configuration, integration patterns, and best practices.
❌ Bad: "Fixed the thing with the API"
✅ Good: "Fixed authentication timeout issue in /api/users endpoint by increasing JWT expiration to 24 hours"
❌ Bad: "Use this configuration"
✅ Good: "PostgreSQL connection pool configuration for production - handles 1000 concurrent connections"
❌ Bad: "Updated recently"
✅ Good: "Updated Python to 3.11.4 on 2025-07-20 to fix asyncio performance issue"
# Meeting Notes - API Design Review
Date: 2025-07-20
Attendees: Team Lead, Backend Team
## Decisions:
- RESTful design for public API
- GraphQL for internal services
## Action Items:
- [ ] Create OpenAPI spec
- [ ] Set up API gateway
Use consistent hierarchies:
project: project-alpha
component: project-alpha-frontend
specific: project-alpha-frontend-auth
- Project/Product: project-name, product-x
- Technology: python, react, postgres
- Type: bug-fix, feature, documentation
- Status: completed, in-progress, blocked
- Priority: urgent, high, normal, low
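Combining one tag from each category keeps memories easy to filter later, for example (a hypothetical call in the style of the store examples used throughout this guide):

```python
# Illustrative only: one tag from each category (project, technology, type, status, priority)
await store_memory(
    content="Migrated project-alpha session storage to PostgreSQL connection pooling",
    tags=["project-alpha", "postgresql", "feature", "completed", "high"]
)
```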
- Use lowercase: python not Python
- Use hyphens: bug-fix not bug_fix
- Be consistent: postgresql not postgres/pg/psql
- Avoid versions in tags: python not python3.11
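A small helper can enforce these conventions before anything is stored. The sketch below is illustrative only; the alias map and the `normalize_tags` helper are assumptions, not part of the service API.

```python
import re

# Hypothetical helper - not part of the MCP Memory Service API
ALIASES = {"postgres": "postgresql", "pg": "postgresql", "psql": "postgresql", "py": "python"}

def normalize_tags(tags):
    """Lowercase, hyphenate, resolve aliases, and strip trailing version numbers."""
    cleaned = []
    for tag in tags:
        tag = tag.strip().lower().replace("_", "-").replace(" ", "-")
        tag = re.sub(r"\d+(\.\d+)*$", "", tag)  # e.g. python3.11 -> python
        cleaned.append(ALIASES.get(tag, tag))
    return sorted(set(cleaned))

print(normalize_tags(["Python3.11", "bug_fix", "pg"]))  # ['bug-fix', 'postgresql', 'python']
```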
✅ "What did we decide about authentication last week?"
✅ "Show me all Python debugging sessions"
✅ "Find memories about database optimization"
# Text search for broad matching
general_results = await memory.search("authentication")
# Tag search for precise filtering
tagged_results = await memory.search_by_tag(["auth", "security"])
# Time-based for recent context
recent_results = await memory.recall("last week")
Morning:
- Review yesterday's memories
- Tag any untagged entries
- Quick search test
Evening:
- Store key decisions/learnings
- Update task progress
- Run tag consolidation
- Archive completed project memories
- Review and improve poor tags
- Delete test/temporary memories
- Generate weekly summary
- Analyze tag usage statistics
- Merge redundant tags
- Update tagging guidelines
- Performance optimization check
- Backup important memories
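Parts of this routine can be scripted. The sketch below automates two of the items (pruning temporary memories and storing a weekly summary); the `memory_service` client and its `search_by_tag`, `delete`, and `store` methods, as well as the `created_at` and `id` fields, are assumed to match the examples used elsewhere in this guide.

```python
# maintenance.py - hypothetical weekly cleanup, mirroring the client API used in this guide
from datetime import datetime, timedelta

async def weekly_maintenance(memory_service, max_age_days=30):
    cutoff = datetime.now() - timedelta(days=max_age_days)

    # Delete stale temporary/test memories
    temporary = await memory_service.search_by_tag(["temporary", "test"])
    stale = [m for m in temporary if m.created_at < cutoff]
    for memory in stale:
        await memory_service.delete(memory.id)

    # Store a short weekly summary
    await memory_service.store(
        content=f"Weekly maintenance {datetime.now().date()}: removed {len(stale)} stale memories",
        tags=["weekly-summary", "maintenance"]
    )
```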
Automatically store commit information:
#!/bin/bash
# .git/hooks/post-commit
COMMIT_MSG=$(git log -1 --pretty=%B)
BRANCH=$(git branch --show-current)
FILES=$(git diff-tree --no-commit-id --name-only -r HEAD)
# Store in memory service
echo "Store commit memory: Branch: $BRANCH, Message: $COMMIT_MSG, Files: $FILES" | \
mcp-memory-cli store --tags "git,commit,$BRANCH"
Create a command to store code snippets:
// extension.js
vscode.commands.registerCommand('mcp.storeSnippet', async () => {
const editor = vscode.window.activeTextEditor;
const selection = editor.document.getText(editor.selection);
const language = editor.document.languageId;
await mcpClient.storeMemory({
content: `Code snippet:
\`\`\`${language}
${selection}
\`\`\``,
tags: ['code-snippet', language, 'vscode']
});
});
Store deployment information:
# .github/workflows/deploy.yml
- name: Store Deployment Memory
run: |
MEMORY="Deployment to ${{ github.event.inputs.environment }}
Version: ${{ github.sha }}
Status: ${{ job.status }}
Timestamp: $(date -u +"%Y-%m-%dT%H:%M:%SZ")"
curl -X POST http://localhost:8080/memory/store \
-H "Content-Type: application/json" \
-d "{\"content\": \"$MEMORY\", \"tags\": [\"deployment\", \"${{ github.event.inputs.environment }}\"]}"Daily summary automation:
# daily_summary.py
import schedule
import asyncio
from datetime import datetime, timedelta
async def daily_memory_summary():
# Collect today's memories
today = datetime.now().date()
memories = await memory_service.recall("today")
# Generate summary
summary = f"Daily Summary for {today}:\n"
summary += f"- Total memories: {len(memories)}\n"
summary += f"- Key topics: {extract_topics(memories)}\n"
summary += f"- Completed tasks: {count_completed(memories)}\n"
# Store summary
await memory_service.store(
content=summary,
tags=["daily-summary", str(today)]
)
# Schedule for 6 PM daily
schedule.every().day.at("18:00").do(lambda: asyncio.run(daily_memory_summary()))
Automatically capture important events:
// error_logger.js
class MemoryErrorLogger {
constructor(memoryService) {
this.memory = memoryService;
}
async logError(error, context) {
// Store error details
await this.memory.store({
content: `Error: ${error.message}
Stack: ${error.stack}
Context: ${JSON.stringify(context)}`,
tags: ['error', 'automated', context.service]
});
// Check for similar errors
const similar = await this.memory.search(`error ${error.message.split(' ')[0]}`);
if (similar.length > 0) {
console.log('Similar errors found:', similar.length);
}
}
}
Simple HTTP interface for memory operations:
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/memory/store', methods=['POST'])
async def store_memory():
data = request.json
result = await memory_service.store(
content=data['content'],
tags=data.get('tags', [])
)
return jsonify({"id": result.id})
@app.route('/memory/search', methods=['GET'])
async def search_memories():
query = request.args.get('q')
results = await memory_service.search(query)
return jsonify([r.to_dict() for r in results])
Trigger memory storage from external services:
// webhook_handler.js
app.post('/webhook/github', async (req, res) => {
const { action, pull_request, repository } = req.body;
if (action === 'closed' && pull_request.merged) {
await memoryService.store({
content: `PR Merged: ${pull_request.title}
Repo: ${repository.name}
Files changed: ${pull_request.changed_files}`,
tags: ['github', 'pr-merged', repository.name]
});
}
res.status(200).send('OK');
});
Automatically document decisions:
class DecisionLogger:
def __init__(self, memory_service):
self.memory = memory_service
async def log_decision(self, decision_type, title, rationale, alternatives):
content = f"""
Decision: {title}
Type: {decision_type}
Date: {datetime.now().isoformat()}
Rationale: {rationale}
Alternatives Considered:
{chr(10).join(f'- {alt}' for alt in alternatives)}
"""
await self.memory.store(
content=content,
tags=['decision', decision_type, 'architecture']
)
Broadcast important updates:
async def share_team_update(update_type, content, team_members):
# Store in memory with team visibility
memory = await memory_service.store(
content=f"Team Update ({update_type}): {content}",
tags=['team-update', update_type, 'shared']
)
# Notify team members (example with Slack)
for member in team_members:
await notify_slack(
channel=member.slack_id,
message=f"New {update_type} update stored: {memory.id}"
)
Multi-Tier Performance Configuration:
Configure performance profiles for different workflows:
# Speed-focused for quick coding sessions (< 100ms)
node ~/.claude/hooks/memory-mode-controller.js profile speed_focused
# Balanced for general development (< 200ms, recommended)
node ~/.claude/hooks/memory-mode-controller.js profile balanced
# Memory-aware for architecture work (< 500ms)
node ~/.claude/hooks/memory-mode-controller.js profile memory_aware
# Adaptive learning profile
node ~/.claude/hooks/memory-mode-controller.js profile adaptive
Performance Monitoring:
# Monitor system performance
node ~/.claude/hooks/memory-mode-controller.js metrics
# Check trigger accuracy and timing
node ~/.claude/hooks/memory-mode-controller.js status --verbose
# Test specific queries
node ~/.claude/hooks/memory-mode-controller.js test "What did we decide about authentication?"
Cache Optimization:
# Optimize semantic cache for your usage patterns
node ~/.claude/hooks/memory-mode-controller.js config set performance.cacheSize 75
# Adjust cache cleanup behavior
node ~/.claude/hooks/memory-mode-controller.js config set performance.cacheCleanupThreshold 0.8
# Monitor cache effectiveness
node ~/.claude/hooks/memory-mode-controller.js cache stats
Advanced Performance Tuning:
{
"naturalTriggers": {
"triggerThreshold": 0.6,
"cooldownPeriod": 30000,
"maxMemoriesPerTrigger": 5
},
"performance": {
"defaultProfile": "balanced",
"enableMonitoring": true,
"autoAdjust": true,
"profiles": {
"custom_profile": {
"maxLatency": 250,
"enabledTiers": ["instant", "fast"],
"backgroundProcessing": true,
"description": "Custom optimized profile"
}
}
}
}
# ❌ Inefficient: Multiple separate searches
results1 = await search("python")
results2 = await search("debugging")
results3 = await search("error")
# ✅ Efficient: Combined search
results = await search("python debugging error")# For browsing
results = await search(query, limit=10)
# For existence check
results = await search(query, limit=1)
# For analysis
results = await search(query, limit=50)
# ❌ Individual operations
for memory in memories:
await store_memory(memory)
# ✅ Batch operation
await store_memories_batch(memories)
Production OAuth Settings:
# Essential OAuth environment variables
export MCP_OAUTH_ENABLED=true
export MCP_OAUTH_SECRET_KEY="your-secure-256-bit-secret-key"
export MCP_OAUTH_ISSUER="https://your-domain.com"
export MCP_OAUTH_ACCESS_TOKEN_EXPIRE_MINUTES=30 # Shorter for security
export MCP_OAUTH_AUTHORIZATION_CODE_EXPIRE_MINUTES=5
# HTTPS enforcement
export MCP_HTTPS_ENABLED=true
export MCP_SSL_CERT_FILE="/path/to/cert.pem"
export MCP_SSL_KEY_FILE="/path/to/key.pem"
OAuth + API Key Dual Authentication:
# Support both OAuth and legacy API keys
export MCP_OAUTH_ENABLED=true
export MCP_API_KEY="fallback-api-key-for-legacy-clients"
# OAuth takes precedence, API key as fallback
# Useful for gradual migration to OAuth
OAuth Client Management:
# Enable client persistence for production
export MCP_OAUTH_CLIENT_STORAGE="persistent"
# Client registration rate limiting
export MCP_OAUTH_REGISTRATION_RATE_LIMIT="10/hour"
# Enable OAuth audit logging
export MCP_OAUTH_AUDIT_LOG=true
Token Configuration:
# Use RS256 for production (requires key pair)
export MCP_JWT_ALGORITHM="RS256"
export MCP_JWT_PRIVATE_KEY_FILE="/path/to/private.pem"
export MCP_JWT_PUBLIC_KEY_FILE="/path/to/public.pem"
# Token security settings
export MCP_JWT_ISSUER="https://your-memory-service.com"
export MCP_JWT_AUDIENCE="mcp-memory-clients"
Token Validation:
# Custom JWT validation example
from jose import jwt, JWTError
from fastapi import HTTPException  # assuming a FastAPI-based endpoint
async def validate_oauth_token(token: str):
try:
payload = jwt.decode(
token,
settings.oauth_secret_key,
algorithms=["HS256"],
audience=settings.jwt_audience,
issuer=settings.jwt_issuer
)
return payload
except JWTError:
raise HTTPException(status_code=401, detail="Invalid token")
# ❌ Don't store
- Passwords or API keys
- Personal identification numbers
- Credit card information
- Private keys
- OAuth client secrets
# ✅ Store references instead
"AWS credentials stored in vault under key 'prod-api-key'"
"OAuth client configured with environment variables"# Tag sensitive memories
# Tag sensitive memories
await store_memory(
content="Architecture decision for payment system",
tags=["architecture", "payments", "confidential"]
)
# OAuth scope-based access
@require_scope("admin")
async def admin_endpoint():
return sensitive_data
# Set expiration for temporary data
await store_memory(
content="Temporary debug log",
tags=["debug", "temporary"],
metadata={"expires": "2025-08-01"}
)
# OAuth token cleanup
export MCP_OAUTH_CLEANUP_EXPIRED_TOKENS=true
export MCP_OAUTH_TOKEN_CLEANUP_INTERVAL="1h"
- Regular automated backups
- Encrypted backup storage
- Test restore procedures
- Version control for configurations
- OAuth client backup: Include client registrations in backups
- JWT key rotation: Regular key rotation for production
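As one way to automate the backup items above, the sketch below pulls memories through the custom REST wrapper from the Integration Patterns section and writes them to a dated JSON file; the endpoint path, port, and response shape are assumptions based on that example and may differ in your deployment.

```python
# backup_memories.py - hypothetical export via the REST wrapper shown earlier
import json
from datetime import date
from urllib.parse import urlencode
from urllib.request import urlopen

def backup_memories(query="important", base_url="http://localhost:8080"):
    # /memory/search and its JSON response follow the Flask wrapper example above
    url = f"{base_url}/memory/search?{urlencode({'q': query})}"
    with urlopen(url) as response:
        memories = json.load(response)

    outfile = f"memory-backup-{date.today()}.json"
    with open(outfile, "w") as f:
        json.dump(memories, f, indent=2)
    return outfile

if __name__ == "__main__":
    print(f"Backup written to {backup_memories()}")
```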
# Store with tags
store "content" --tags "tag1,tag2"
# Search recent
search "query" --time "last week"
# Clean up
delete --older-than "6 months" --tag "temporary"
# Export important
export --tag "important" --format json
# Decision tracking
f"Decision: {title} | Rationale: {why} | Date: {when}"
# Error documentation
f"Error: {message} | Solution: {fix} | Prevention: {how}"
# Learning capture
f"TIL: {concept} | Source: {where} | Application: {how}"- Use Consistent Tagging: Establish tag conventions for automated entries
- Rate Limiting: Implement limits to prevent memory spam
- Error Handling: Always handle memory service failures gracefully
- Async Operations: Use async patterns to avoid blocking
- Batch Operations: Group related memories when possible
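A thin wrapper can combine several of these practices. In the sketch below the rate limit, deferred batching, and failure handling are simplified illustrations around an assumed `memory_service` client; adapt them to your actual integration.

```python
# memory_wrapper.py - hypothetical wrapper: graceful failures, rate limiting, batching
import logging
import time

logger = logging.getLogger("memory")

class SafeMemoryClient:
    def __init__(self, memory_service, min_interval=1.0):
        self.memory = memory_service
        self.min_interval = min_interval  # seconds between automated stores
        self._last_store = 0.0
        self._queue = []

    async def store(self, content, tags):
        now = time.monotonic()
        if now - self._last_store < self.min_interval:
            self._queue.append((content, tags))  # defer instead of spamming the service
            return None
        self._last_store = now
        try:
            return await self.memory.store(content=content, tags=tags)
        except Exception:
            # Never let a memory failure break the calling workflow
            logger.exception("Memory store failed; continuing without it")
            return None

    async def flush(self):
        # Store deferred entries in one batch pass, bypassing the rate limit
        pending, self._queue = self._queue, []
        for content, tags in pending:
            try:
                await self.memory.store(content=content, tags=tags)
            except Exception:
                logger.exception("Deferred memory store failed; dropping entry")
```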
These advanced configurations will help you build a powerful, integrated memory system that grows more valuable over time.