Lewis Liu’s Post

Product at Microsoft, ex-Google, Gemini

I started thinking about #LLM #memory a long time ago, but more structurally after reading this early 2024 paper from Princeton: Cognitive Architectures for Language Agents (https://lnkd.in/gaM8mtKa). It outlines different types of memory systems: procedural, semantic, episodic, etc. -- laying a foundation for so much that has come since.

We now see solutions like #Mem0 and #Zep, which have brought structured, graph-based memory to LLMs. This approach is a clear win: it makes temporal tracking possible and mimics how our own brains work. Of course, the graph is an enhancement on top of what we commonly use. The field is evolving at an incredible pace, with recent papers like #MemOS, #A-Mem, and #MemTree pushing the boundaries even further.

This makes me wonder: what's next for memory systems? One promising path is creating specialized memory for different types of knowledge, which mirrors human cognition. We all want memory that is universally applicable, personalized, and that prioritizes the information we access most frequently. But what does this really mean underneath?

The more I think about this, the more this future looks like a new kind of "#GoogleSearch" -- one that aggregates both unstructured and structured data. The key difference, though, isn't just about reading; it is about continuous writing, reconciliation, and consolidation. What if this engine could observe, learn, and write to our own private data sources? We all perceive and listen a lot more than we express ourselves. Imagine an LLM that doesn't just browse the web as a read-only tool but actively learns and updates its own knowledge base. Building a "Google Search" is extremely hard, but imagine a new type of "knowledge engine" that puts equal weight on "#search" and "#assimilation". That's a powerful next step.
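To make the idea concrete, here is a minimal sketch in Python of the two notions above: memory separated by type (episodic, semantic, procedural) and a write path followed by a consolidation pass. Everything here (MemoryStore, MemoryItem, consolidate, the prioritization rule) is hypothetical and illustrative; it is not the API of Mem0, Zep, MemOS, or any other system mentioned in the post.

```python
# Illustrative sketch only: a tiny memory store with typed memories,
# continuous writes, frequency-based reads, and a naive consolidation step.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Dict, List


class MemoryType(Enum):
    EPISODIC = "episodic"      # specific events ("user booked a flight on May 3")
    SEMANTIC = "semantic"      # stable facts ("user prefers aisle seats")
    PROCEDURAL = "procedural"  # how-to knowledge ("steps to file an expense report")


@dataclass
class MemoryItem:
    content: str
    mem_type: MemoryType
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    access_count: int = 0      # used to prioritize frequently accessed memories


class MemoryStore:
    """Toy store: write new observations, read by type, and periodically consolidate."""

    def __init__(self) -> None:
        self.items: List[MemoryItem] = []

    def write(self, content: str, mem_type: MemoryType) -> MemoryItem:
        # The "assimilation" side: the engine keeps writing what it observes.
        item = MemoryItem(content=content, mem_type=mem_type)
        self.items.append(item)
        return item

    def read(self, mem_type: MemoryType, top_k: int = 3) -> List[MemoryItem]:
        # The "search" side: return the most frequently accessed memories of a type
        # (a crude stand-in for relevance ranking).
        matches = [m for m in self.items if m.mem_type == mem_type]
        matches.sort(key=lambda m: m.access_count, reverse=True)
        top = matches[:top_k]
        for m in top:
            m.access_count += 1
        return top

    def consolidate(self) -> None:
        # Reconciliation sketch: collapse exact duplicates, keeping the newest copy
        # and summing access counts. Real systems reconcile conflicting or stale
        # facts, often over a graph with temporal edges.
        latest: Dict[str, MemoryItem] = {}
        for m in sorted(self.items, key=lambda m: m.created_at):
            key = f"{m.mem_type.value}:{m.content}"
            if key in latest:
                m.access_count += latest[key].access_count
            latest[key] = m
        self.items = list(latest.values())


if __name__ == "__main__":
    store = MemoryStore()
    store.write("User prefers aisle seats", MemoryType.SEMANTIC)
    store.write("User prefers aisle seats", MemoryType.SEMANTIC)  # duplicate observation
    store.write("User booked NYC trip on 2024-05-03", MemoryType.EPISODIC)
    store.consolidate()
    print([m.content for m in store.read(MemoryType.SEMANTIC)])
```

The point of the sketch is only the shape of the loop: observe and write, then reconcile and consolidate, then read with a priority on what is accessed most. A real "knowledge engine" would replace each of these toy steps with retrieval, graph updates, and conflict resolution over private data.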
