MemOS: A Memory OS for AI System

BibTeX
@misc{tang2025memosmemoryos,
      title={MemOS: A Memory OS for AI System},
      author={Chen Tang and Wei Xu and Hao Wu and Tong Xu and Jiawei Yang and Zhi-Qin John Xu and Kai Chen and Junchi Yan and Peng Liu and Linfeng Zhang and Tianyi Chen and Wentao Zhang and Ningyu Zhang and Huajun Chen and Zhaoxin Fan and Bo Tang and Hanyu Wang and Shichao Song and Qingchen Yu and Feiyu Xiong and Zhiyu Li and Jiahao Huo and Jihao Zhao and Simin Niu and Siheng Chen and Junpeng Ren and Ding Chen and Hongkang Yang and Zehao Lin and Chenyang Xi and Pengyuan Wang and Yezhaohui Wang and Zhen Tao and Mingchuan Yang and Chunyu Li and Haofeng Wang and Huayi Lai and Kehang Li and Zhenren Wang},
      year={2025},
      eprint={2507.03724},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.03724},
}
GitHub: MemTensor/MemOS
HTTPS: https://github.com/MemTensor/MemOS
SSH: git@github.com:MemTensor/MemOS.git
CLI: gh repo clone MemTensor/MemOS
AI Audio Lecture + Q&A
Transcript
John: In our course on Advanced AI Systems, we've been tracking the evolution of memory in large language models. Today's lecture is on 'MemOS: A Memory OS for AI System'. We've seen a lot of recent work trying to solve the context window problem, from 'MemGPT: Towards LLMs as Operating Systems' to more production-focused systems like 'Mem0'. This paper, from a large collaboration including Shanghai Jiao Tong University and MemTensor, pushes that idea further. It argues that we need to stop treating memory as an add-on and start treating it as a core, manageable resource.

John: Yes, Noah?

Noah: Hi Professor. So is the main distinction here the shift from a tool-based approach, like standard RAG, to a more foundational, system-level architecture?

John: Precisely. The authors explicitly frame RAG as a 'stateless workaround'. It retrieves information on the fly, but it doesn't manage it over a lifecycle. MemOS's core contribution is to propose an operating system that governs memory as a first-class citizen, just as a traditional OS manages CPU or RAM. This is built on three key ideas. First, it defines a hierarchy of memory types: Plaintext memory, which is explicit external knowledge; Activation memory, like the KV-cache for immediate context; and Parameter memory, the knowledge baked into the model weights.

Noah: That makes sense. So it's not just about retrieving text, but also managing the model's internal states as part of the memory system.

John: Exactly. And the second key idea is the abstraction that unifies these types: the Memory Cube, or MemCube. Think of a MemCube as a standardized file in this OS. It contains the memory payload (the actual content), but more importantly, it's wrapped in extensive metadata. This metadata includes identifiers like timestamps, governance attributes like access control policies, and behavioral indicators like access frequency.

Noah: And I assume that metadata is what drives the system's decisions? It tells the OS how to handle that specific piece of memory?

John: That's the third pillar. This metadata enables dynamic memory transformation. The system can automatically decide to move a frequently accessed piece of plaintext memory into faster activation memory by pre-calculating its KV-cache. Or it might archive aging memory, or even fine-tune it into parameter memory for long-term consolidation. It's a fluid, managed ecosystem, not a static database.

John: To make this happen, the architecture is designed in three layers. At the top, an Interface Layer parses requests. At the bottom, an Infrastructure Layer handles storage, security, and governance. But the core logic resides in the middle, in the Operation Layer. This is where the system's intelligence lies, particularly in two components: the MemScheduler and the MemLifecycle manager.

Noah: How does the MemScheduler decide what memory to load for a given task? Is it just based on semantic similarity?

John: It's more sophisticated than that. It performs hybrid retrieval, combining semantic and symbolic strategies, but its main job is to be a resource dispatcher. Based on the task, resource constraints, and the metadata in the MemCubes, it decides what form of memory to inject. For a time-sensitive query, it might prioritize loading a pre-computed KV-cache from activation memory to reduce the time-to-first-token. For a complex reasoning task, it might pull in a rich knowledge graph from plaintext memory.
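To make the MemCube and scheduler ideas concrete, here is a minimal Python sketch. All names and fields (MemoryType, MemCube, schedule, hot_threshold) are hypothetical illustrations of the ideas described above, not the actual MemOS API, and the dispatch rule is a toy stand-in for the paper's richer scheduling policy.

```python
# Illustrative sketch only: names and fields are hypothetical, not the MemOS API.
from dataclasses import dataclass, field
from enum import Enum
from time import time
from typing import Optional


class MemoryType(Enum):
    """The three memory forms discussed in the lecture."""
    PLAINTEXT = "plaintext"    # explicit external knowledge (documents, knowledge graphs)
    ACTIVATION = "activation"  # runtime state such as a precomputed KV-cache
    PARAMETER = "parameter"    # knowledge consolidated into model weights (e.g., adapters)


@dataclass
class MemCube:
    """A unified memory unit: payload plus descriptive, governance, and behavioral metadata."""
    payload: bytes
    memory_type: MemoryType
    owner: str                             # descriptive identifier
    created_at: float = field(default_factory=time)
    access_policy: str = "private"         # governance: who may read or share this memory
    ttl_seconds: Optional[float] = None    # governance: expiry horizon, if any
    access_count: int = 0                  # behavioral indicator, updated at runtime
    last_accessed: float = 0.0


def schedule(cube: MemCube, latency_sensitive: bool, hot_threshold: int = 10) -> str:
    """Toy dispatch rule: choose which form of the memory to inject for the next call."""
    if cube.memory_type is MemoryType.ACTIVATION:
        return "inject precomputed KV-cache"
    if cube.memory_type is MemoryType.PLAINTEXT:
        if latency_sensitive and cube.access_count >= hot_threshold:
            # A hot plaintext memory gets promoted: its KV-cache is precomputed once,
            # so later calls pay less time-to-first-token.
            return "promote to activation memory (precompute KV-cache), then inject"
        return "inject as retrieved context in the prompt"
    return "already consolidated into parameters; nothing to inject"
```

The point of the sketch is that the dispatch decision keys off metadata such as access frequency, governance attributes, and the latency budget of the task, rather than semantic similarity alone.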
Noah: So that explains their results, where they achieve better performance than a full-context baseline but with much lower latency. The scheduler is making an intelligent trade-off.

John: Correct. And the MemLifecycle component complements this by managing memory over time. It models memory as a state machine with states like 'Generated', 'Activated', 'Archived', and 'Expired'. This allows for systematic governance, including versioning and rollback capabilities, which they call the 'Time Machine' feature.

Noah: That sounds powerful, especially for enterprise applications where auditability and traceability are critical. Being able to see how a piece of knowledge evolved or was used seems like a major step for safety.

John: It is. And that connects to the broader implications. This work is trying to shift the field toward what the authors call a 'Mem-training' paradigm. The focus moves from simply scaling data and parameters to enabling models to continuously learn and evolve through dynamic memory management. It's about building persistent agents, not just stateless predictors.

Noah: This definitely feels like a more complete realization of the idea presented in 'MemGPT'. While MemGPT introduced the OS analogy for managing the context window, MemOS seems to flesh it out into a comprehensive system design with explicit modules for scheduling, governance, and lifecycle control.

John: That's a good way to put it. MemOS formalizes the concept. It even proposes a 'MemStore' and a Memory Interchange Protocol, envisioning a future where memory becomes a composable, tradable asset. An expert could package their knowledge into a set of MemCubes, and developers could 'install' that memory into their agents. This creates a foundation for a more modular and decentralized AI ecosystem.

John: So, the key takeaway here is that MemOS reframes memory from a feature or a technical challenge into a foundational system resource that needs to be actively scheduled, governed, and evolved. This architectural shift from a simple memory-augmented model to a full-fledged memory operating system is what enables more capable, persistent, and controllable intelligent systems.

John: Thanks for listening. If you have any further questions, ask our AI assistant or drop a comment.
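As a closing illustration of the lifecycle discussion above, here is a minimal Python sketch of a memory state machine with an audit trail and rollback. The transition table, class names, and rollback semantics are assumptions made for this sketch; the paper's exact rules and the MemOS implementation may differ.

```python
# Illustrative sketch only: the transition rules and names are assumptions, not the MemOS API.
from enum import Enum


class MemState(Enum):
    GENERATED = "generated"
    ACTIVATED = "activated"
    ARCHIVED = "archived"
    EXPIRED = "expired"


# Assumed transition table: archived memory can be reactivated, expired is terminal.
TRANSITIONS = {
    MemState.GENERATED: {MemState.ACTIVATED, MemState.ARCHIVED, MemState.EXPIRED},
    MemState.ACTIVATED: {MemState.ARCHIVED, MemState.EXPIRED},
    MemState.ARCHIVED: {MemState.ACTIVATED, MemState.EXPIRED},
    MemState.EXPIRED: set(),
}


class MemLifecycle:
    """Tracks a memory unit's state and keeps a history for audit and rollback."""

    def __init__(self) -> None:
        self.state = MemState.GENERATED
        self.history = [MemState.GENERATED]  # every state ever held, oldest first

    def transition(self, new_state: MemState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state.value} -> {new_state.value}")
        self.state = new_state
        self.history.append(new_state)

    def rollback(self) -> MemState:
        """Revert to the previously recorded state (a much simplified 'Time Machine')."""
        if len(self.history) > 1:
            self.history.pop()
            self.state = self.history[-1]
        return self.state
```

In this toy version, a memory unit would move Generated to Activated when the scheduler loads it, Activated to Archived as it ages, and rollback() lets an auditor step back through the recorded history, which is the kind of versioning and traceability the transcript attributes to MemLifecycle.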