Large language models (LLMs) have advanced the field of artificial
intelligence (AI) and are a powerful enabler for interactive systems. However,
they still face challenges in long-term interactions that require adaptation
to the user as well as contextual knowledge and an understanding of the
ever-changing environment. To overcome these challenges, holistic memory
modeling is required to efficiently retrieve and store relevant information
across interaction sessions so that suitable responses can be generated. Cognitive AI, which aims to
simulate the human thought process in a computerized model, highlights
interesting aspects, such as thoughts, memory mechanisms, and decision-making,
that can contribute to improved memory modeling for LLMs. Inspired by
these cognitive AI principles, we propose CAIM, a memory framework for long-term human-AI interaction. CAIM
consists of three modules: 1.) the Memory Controller as the central decision
unit; 2.) the Memory Retrieval module, which filters stored data relevant to the current
interaction upon request; and 3.) the Post-Thinking module, which maintains the memory storage. We
compare CAIM against existing approaches, focusing on metrics such as retrieval
accuracy, response correctness, contextual coherence, and memory storage. The
results demonstrate that CAIM outperforms baseline frameworks across different
metrics, highlighting its context-awareness and potential to improve long-term
human-AI interactions.
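
A minimal sketch of how the three modules described above might interact is given below. This is an illustrative assumption, not the paper's implementation: all class names, method names, and the keyword-overlap relevance score are hypothetical stand-ins.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Simple in-memory store of past interaction snippets (hypothetical)."""
    entries: list[str] = field(default_factory=list)


class MemoryRetrieval:
    """Filters stored data for entries relevant to the current request."""

    def __init__(self, store: MemoryStore):
        self.store = store

    def retrieve(self, query: str, top_k: int = 3) -> list[str]:
        # Naive keyword overlap as a stand-in for a learned relevance score.
        scored = [(len(set(query.lower().split()) & set(e.lower().split())), e)
                  for e in self.store.entries]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [entry for score, entry in scored[:top_k] if score > 0]


class PostThinking:
    """Maintains the memory storage after each interaction turn."""

    def __init__(self, store: MemoryStore):
        self.store = store

    def update(self, user_msg: str, response: str) -> None:
        # A fuller system would summarize, deduplicate, and prune here.
        self.store.entries.append(f"user: {user_msg} | assistant: {response}")


class MemoryController:
    """Central decision unit: decides when to retrieve and when to store."""

    def __init__(self, store: MemoryStore):
        self.retrieval = MemoryRetrieval(store)
        self.post_thinking = PostThinking(store)

    def handle_turn(self, user_msg: str) -> str:
        context = self.retrieval.retrieve(user_msg)
        # Placeholder for an LLM call conditioned on the retrieved context.
        response = f"(answer to '{user_msg}' using {len(context)} memory entries)"
        self.post_thinking.update(user_msg, response)
        return response


if __name__ == "__main__":
    controller = MemoryController(MemoryStore())
    print(controller.handle_turn("What did we discuss about my travel plans?"))
    print(controller.handle_turn("Remind me about my travel plans."))
```

In this sketch, the controller mediates every turn: it first asks the retrieval module for relevant context, then stores the new exchange via post-thinking, mirroring the retrieve/store split described in the abstract.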