
Agent Memory Tools

A catalog of 348 AI agent memory and context management tools across 17 categories. Last updated March 2026.


Repo-to-Context Converters

23 tools

Context7 · 50,000 ★
Pulls version-specific documentation and code examples directly into LLM prompts, solving documentation staleness. MCP server and library integration.

Repomix · 22,600 ★
Packs entire repositories into AI-friendly single-file formats. Supports custom output formats, token counting, and .gitignore-aware filtering.

Gitingest · 14,200 ★
Converts any Git repository into a prompt-friendly text digest optimized for LLMs. Web UI and CLI available.

code2prompt (Rust) · 7,200 ★
CLI tool to convert codebases to LLM prompts with token counting, Handlebars templates, and tree-sitter filtering.

code2prompt (Python) · 1,500 ★
Python version of code2prompt with similar codebase-to-prompt capabilities.

files-to-prompt · 2,600 ★
Concatenate files into a single prompt suitable for LLMs. Simple, Unix-philosophy approach by Simon Willison.
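The tools above share a common pattern: walk a repository, skip ignored or irrelevant files, concatenate the rest under per-file headers, and report an estimated token count. A minimal sketch of that pattern — not any specific tool's implementation; the skip list and the 4-characters-per-token heuristic are illustrative assumptions:

```python
from pathlib import Path

# Crude stand-in for real .gitignore handling (an assumption, not any tool's list)
SKIP_DIRS = {".git", "node_modules", "__pycache__", "dist"}

def pack_repo(root: str, exts: tuple[str, ...] = (".py", ".md")) -> str:
    """Concatenate matching files under `root` into one prompt-ready blob,
    each file preceded by a header naming its path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in exts:
            parts.append(f"===== {path} =====\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)

def rough_token_count(text: str) -> int:
    """~4 characters per token is a common rule of thumb for English text."""
    return len(text) // 4
```

Real converters add tree-sitter filtering, templating, and exact tokenizer counts on top of this skeleton.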

File-Based Context Standards & Generators

44 tools
CLAUDE.md

Project-level instructions and context for Claude coding agent

AGENTS.md

Emerging universal standard (Feb 2026+) for agent instructions

.cursorrules

Legacy project rules for Cursor IDE

.cursor/rules/*.mdc

New MDC-format rules for Cursor IDE

GEMINI.md

Project context for Google Gemini CLI agent

CONVENTIONS.md

Code conventions and standards file
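Most of these file-based standards follow the same shape: a markdown file at the repository root telling the agent how to build, test, and behave. A hypothetical AGENTS.md sketch — section names, paths, and commands are illustrative, not mandated by any of the standards above:

```markdown
# AGENTS.md (hypothetical example)

## Project overview
A Python monorepo; services live under `services/`, shared code under `libs/`.

## Build and test
- Install: `pip install -e ".[dev]"`
- Test: `pytest -q` (run before every commit)

## Conventions
- Type-hint all public functions; run `ruff check` before pushing.
- Never edit generated files under `libs/protos/`.
```

The same content could live in CLAUDE.md, GEMINI.md, or a `.cursor/rules/*.mdc` file, depending on which agent reads it.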

Coding Agent Memory Systems

59 tools

claude-mem · 37,000 ★
Persistent memory extension for Claude Code. Stores and retrieves context across coding sessions.

Graphiti · 24,000 ★
Temporally-aware knowledge graph engine for real-time context synthesis. Bi-temporal model tracking event occurrence and ingestion time.

Supermemory · 17,000 ★
Memory and context engine for AI applications. Benchmarked #1 on three memory benchmarks.

QMD · 16,000 ★
Query-based memory database for coding agents. Efficient storage and retrieval of coding context.

Cipher · 3,500 ★
Encrypted persistent memory for coding agents with privacy-preserving retrieval.

SimpleMem · 3,200 ★
Efficient lifelong memory via semantic lossless compression. Three-stage pipeline: compression, synthesis, retrieval.

Context Databases & Unified Context Systems

4 tools

OpenViking · 17,400 ★
ByteDance's open-source context database for AI agents. Unifies management of context (memory, resources, skills) through a viking:// filesystem paradigm with L0/L1/L2 tiered loading and self-evolution.

AgentiKit · 200 ★
Context bridge layer managing skills, scripts, and context for AI coding agents. Integrates with OpenViking for persistent team-wide memory.

Acontext · 100 ★
Agent Skills as a Memory Layer: treats skills as persistent, queryable context.

MemOS (as unified system) · 500 ★
Memory Operating System treating context as an OS-level resource. Unifies skills, memory, and resource management.

Vector-Based Memory Libraries

19 tools

Supermemory · 17,000 ★
Memory and context engine with vector-based retrieval. Triple benchmark leader.

Memvid · 3,000+ ★
Serverless, single-file memory layer. Packages data, embeddings, indexes, and metadata into portable .mv2 files with hybrid BM25 + vector search.

Redis Agent Memory Server · 2,000 ★
Fast, flexible agent memory using Redis vector similarity search with semantic and keyword retrieval.

Motörhead · 1,000 ★
Memory and information retrieval server for LLMs built in Rust. Incremental summarization with MAX_WINDOW_SIZE management.

Memary · 800 ★
Open-source memory layer for autonomous agents with automatic memory updates during interaction.

agentmemory (elizaOS) · 600 ★
ChromaDB + Postgres memory with DBSCAN clustering for semantic organization.
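The hybrid BM25 + vector search mentioned for Memvid is a common retrieval pattern: blend a lexical score with an embedding-similarity score. A self-contained sketch of the general technique — not Memvid's code; the `embed` callable and the blending weight `alpha` are assumptions:

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Okapi BM25 score of each doc against the query (whitespace tokenization)."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    q_terms = query.lower().split()
    # document frequency of each query term
    df = {t: sum(1 for doc in tokenized if t in doc) for t in q_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, docs: list[str], embed, alpha: float = 0.5) -> list[int]:
    """Rank doc indices by a blend of max-normalized BM25 and embedding cosine.
    `embed` maps text to a vector; `alpha` weights lexical vs. semantic score."""
    bm = bm25_scores(query, docs)
    top = max(bm) or 1.0  # avoid dividing by zero when no term matches
    qv = embed(query)
    blended = [alpha * (s / top) + (1 - alpha) * cosine(qv, embed(d))
               for s, d in zip(bm, docs)]
    return sorted(range(len(docs)), key=lambda i: -blended[i])
```

Production systems use a real tokenizer and learned embeddings, but the blend-and-rank step looks much like this.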

Knowledge Graph Memory Systems

14 tools

Microsoft GraphRAG · 31,700 ★
Entity-centric knowledge graphs with thematic clusters and LLM-precomputed community summaries. Multi-hop graph-based reasoning.

LightRAG · 29,700 ★
Simple and fast RAG using dual-level knowledge graphs. 80 ms retrieval vs. 120 ms for standard RAG.

Graphiti / Zep · 24,000 ★
Temporally-aware knowledge graph engine with bi-temporal model. 94.8% DMR benchmark accuracy.

Cognee · 14,400 ★
Knowledge engine enabling knowledge graph construction in 6 lines of code. Multi-backend.

Neo4j agent-memory · 400 ★
Graph-native memory for conversation-to-knowledge-graph pipelines.

HippoRAG
Neurobiologically inspired long-term memory using hierarchical graphs with structure-aware retrieval.

Standalone Agentic Memory Frameworks

30 tools

Mem0 · 50,600 ★
Universal memory layer for AI agents. Framework-agnostic design with multi-tier memory (short-, medium-, and long-term).

Letta (formerly MemGPT) · 21,700 ★
Platform for stateful agents with three-tier memory (Core, Recall, Archival). OS-inspired architecture where agents run inside memory.

Second Me · 15,000 ★
Personal memory and context framework for AI agent personalization.

TeleMem · 1,200 ★
High-performance drop-in Mem0 replacement with semantic deduplication and multimodal video reasoning.

MemAgent (ByteDance) · 800 ★
Long-context optimization via end-to-end RL. Extrapolates from 8K to 3.5M tokens with <5% performance loss.

MemoryOS · 600 ★
Memory operating system for personalized agents. Hierarchical storage with Storage/Updating/Retrieval/Generation layers.

Agent Framework Memory Systems

40 tools

Dify · 134,000 ★
Open-source LLM application platform combining Backend-as-a-Service with LLMOps. Visual RAG/workflow orchestration with integrated memory.

LangChain / LangGraph / LangMem · 130,000 ★
Modular RAG/agent framework with pluggable memory backends. LangGraph adds graph-based orchestration; LangMem provides structured memory.

MetaGPT · 65,700 ★
Multi-agent framework with role-based architecture. Agents assigned to software engineering roles coordinated via message passing.

Microsoft AutoGen · 56,000 ★
Multi-agent conversation framework with persistent session memory and human-in-the-loop workflows.

n8n AI Agents · 48,000 ★
Workflow automation platform with AI agent capabilities. 600+ integrations.

AnythingLLM · 56,600 ★
All-in-one AI platform with RAG, memory, and workspace management.

MCP-Based Memory Servers

27 tools

@modelcontextprotocol/server-memory (official)
Official reference memory implementation using a knowledge graph. Stores entities, relations, and observations.

Graphiti MCP · 24,000 ★
Temporal knowledge graph MCP server with Neo4j integration and bi-temporal modeling.

Mem0 MCP
Universal memory layer with MCP integration. Multi-tier memory persistence.

cognee-mcp
GraphRAG memory server with customizable ingestion and search. Part of the Cognee platform.

A-MEM MCP
Agentic Memory framework exposed as an MCP server.

enhanced-mcp-memory
Enhanced version of the official MCP memory server with additional features.
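The official server's data model, as described above, is a graph of entities, relations, and observations. A minimal in-memory analogue of that model — a sketch for illustration, not the actual server's API or storage format:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    entity_type: str
    observations: list[str] = field(default_factory=list)

class KnowledgeGraphMemory:
    """Entities, relations, and observations in a plain in-memory graph."""

    def __init__(self) -> None:
        self.entities: dict[str, Entity] = {}
        self.relations: list[tuple[str, str, str]] = []  # (source, relation, target)

    def create_entity(self, name: str, entity_type: str) -> None:
        self.entities.setdefault(name, Entity(name, entity_type))

    def add_observation(self, name: str, observation: str) -> None:
        self.entities[name].observations.append(observation)

    def create_relation(self, source: str, relation: str, target: str) -> None:
        self.relations.append((source, relation, target))

    def neighbors(self, name: str) -> set[str]:
        """All entities one relation hop away from `name`, in either direction."""
        out = {t for s, _, t in self.relations if s == name}
        out |= {s for s, _, t in self.relations if t == name}
        return out
```

An MCP server would expose operations like these as tools the model can call; persistence and search are layered on top.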

Cognitive Architectures & Reflection Systems

15 tools

OpenHands · 69,500 ★
AI-driven development platform with agent autonomy for software engineering tasks. Comprehensive state management.

Generative Agents (Stanford Smallville) · 20,900 ★
Pioneering agent architecture with episodic memory, reflection, and planning. Agents simulate believable human behavior.

SWE-agent · 18,800 ★
Software engineering agent with shell/editor interaction for autonomous GitHub issue fixing.

Letta (MemGPT) · 21,700 ★
OS-inspired cognitive architecture with tiered memory management.

JARVIS / HuggingGPT · 7,500 ★
LLM controller coordinating expert models from the HuggingFace Hub.

Voyager · 6,800 ★
Minecraft agent with an external skill library and lifelong learning.

Enterprise & Institutional Memory

8 tools

Salesforce Agentforce Memory
Enterprise agent memory layer with confidence scoring, write/read gates, and CRM-integrated governance.

Coworker.ai OM1
Organizational Memory tracking 120+ business signals across enterprise tools. Marketed as the first AI agent with deep company context.

Leena AI Context Graph
1,000+ enterprise integrations with HIPAA/GDPR/SOC 2 compliance. Multi-tenant context management.

Vectara
Enterprise agentic platform with multimodal indexing, retrieval, and extraction. "Context engineering" paradigm.

Kore.ai
Enterprise conversational AI with memory and context management across channels.

NVIDIA ICMS/CMX
Enterprise memory systems for AI infrastructure. GPU-optimized memory operations.

Context Compression & Engineering Tools

12 tools

LLMLingua · 5,900 ★
Coarse-to-fine prompt compression achieving 20x compression with 1.5% performance loss. Budget controller plus token-level iterative compression.

LongLLMLingua
Long-context compression with a 4x ratio and up to 21.4% performance improvement. Document reordering and dynamic compression ratios.

LLMLingua-2
Small BERT-level encoder trained via GPT-4 distillation. 3-6x faster and task-agnostic.

Selective Context
Semantic token-importance ranking for attention-aware compression (part of the Semantic Kernel project).

RECOMP
Extractive and abstractive document compression for RAG. Compresses to 6% of original length with minimal loss.

500xCompressor
Generalized extreme compression — compresses up to 500 tokens into 1 special token.
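Tools like LLMLingua score tokens with a small language model and drop the least informative ones under a budget. The sketch below substitutes a crude frequency-based importance proxy so it stays self-contained; it illustrates budgeted token pruning, not the actual LLMLingua algorithm:

```python
from collections import Counter

def compress_prompt(text: str, ratio: float = 0.5) -> str:
    """Keep the `ratio` fraction of tokens with the lowest in-text frequency
    (rarer tokens carry more information), preserving original order.
    Real compressors use a small LM's token-level perplexity instead."""
    tokens = text.split()
    freq = Counter(t.lower() for t in tokens)
    budget = max(1, int(len(tokens) * ratio))
    # rank token positions by rarity, keep the rarest, restore original order
    ranked = sorted(range(len(tokens)), key=lambda i: freq[tokens[i].lower()])
    keep = sorted(ranked[:budget])
    return " ".join(tokens[i] for i in keep)
```

Even this toy version shows the trade-off the real tools tune: higher compression ratios discard more of the low-information filler while trying to keep content words.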

Self-Hosted LLM Platforms with Memory

11 tools

Ollama · 166,000 ★
Simple local LLM inference framework. Easiest setup for running models locally.

Dify · 134,000 ★
Open-source LLM platform with integrated RAG, agents, and memory.

Open WebUI · 128,000 ★
User-friendly AI interface for Ollama and the OpenAI API. 282M+ downloads.

GPT4All · 70,000 ★
Lightweight local CPU inference — no GPU required.

PrivateGPT · 57,200 ★
Chat with private documents, fully on-premise. Privacy-focused.

AnythingLLM · 56,600 ★
All-in-one AI platform with RAG-centric memory and workspace management.

Benchmarks, Surveys & Awesome Lists

30 tools

LongMemEval
Benchmark for long-term interactive memory. 500 questions testing 5 core abilities across contexts of up to 1.5M tokens.

LoCoMo
Long Conversational Memory benchmark for evaluating agent memory across extended dialogues.

ConvoMem
75,336 Q&A pairs, 150x larger than LongMemEval. Enterprise-scale conversational memory evaluation.

MemoryBench (Supermemory)
Unified benchmark integrating LoCoMo, LongMemEval, and ConvoMem.

MemoryAgentBench
ICLR 2026 benchmark evaluating memory via incremental multi-turn interactions. Includes temporal memory evaluation.

DevMemBench
Developer-focused memory benchmark for coding agent evaluation (no public repo confirmed).

Context Engineering & Optimization Platforms

5 tools

get-shit-done (GSD) · 37,800 ★
Meta-prompting and context engineering system for Claude. Systematic approach to agent context.

Agent-Skills-for-Context-Engineering · 14,100 ★
Collection of agent skills specifically designed for context engineering workflows.

context-engineering-intro · 12,800 ★
Introduction to context engineering for AI coding. Tutorial and reference.

Context-Engineering · 8,600 ★
Art and science of filling context windows — frameworks and patterns.

MineContext · 5,100 ★
Proactive context-aware AI partner from ByteDance's Volcano Engine.

Web Scraping & Data Extraction for Agent Context

4 tools

Scrapling · 31,700 ★
Adaptive web scraping framework with MCP integration for agent data ingestion.

Firecrawl · 5,000 ★
Web data API converting websites to LLM-ready markdown. Agent-friendly output.

ScrapeGraphAI · 5,000 ★
Python scraper using LLM and graph logic for structured extraction.

Crawl4AI · 4,000 ★
LLM-friendly web crawler and scraper. Clean markdown output for agent consumption.

Conversation Memory & Chat History

3 tools

ChatMemory · 600 ★
Simple yet powerful long-term memory manager for conversational AI.

memobase · 2,600 ★
User profile-based long-term memory for AI chatbots. Adaptive profiling.

Spring AI Chat Memory · 300 ★
Conversational history management with the Spring AI framework.
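A pattern common to these conversation-memory tools (and to the incremental summarization mentioned for Motörhead above) is a rolling window: keep the last N messages verbatim and fold older ones into a running summary. A minimal sketch, with a string-join stub standing in for the LLM summarizer a real system would call:

```python
class ChatMemory:
    """Rolling chat window: keep the last `max_messages` verbatim and fold
    older messages into a running summary via `summarize` (a stub here)."""

    def __init__(self, max_messages: int = 6, summarize=None):
        self.max_messages = max_messages
        self.summarize = summarize or (
            lambda old, summary: (summary + " " + " | ".join(m["content"] for m in old)).strip()
        )
        self.summary = ""
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        if len(self.messages) > self.max_messages:
            cut = len(self.messages) - self.max_messages
            overflow, self.messages = self.messages[:cut], self.messages[cut:]
            self.summary = self.summarize(overflow, self.summary)

    def context(self) -> list[dict]:
        """Messages to send the model: optional summary header plus the live window."""
        header = ([{"role": "system", "content": f"Summary so far: {self.summary}"}]
                  if self.summary else [])
        return header + self.messages
```

Profile-based systems like memobase layer structured user attributes on top of this; the window-plus-summary core stays the same.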
Missing something?

Know a paper, tool, or repo that should be listed here? We want this index to be exhaustive.