While I'm 100% on board with RAG using associative memory, I'm not sure you need Neo4j. Associative recall is generally going to be one level deep, and you're doing a top-K cut, so even if it weren't, the second-order associations probably aren't going to make the relevance cut anyway. This could be done relationally, and then if you're using pgvector you could retrieve all your RAG contents in one query.
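Something like this rough sketch, assuming pgvector's cosine-distance operator; the table and column names (`chunks`, `associations`, `embedding`) are made up for illustration:

```python
# Sketch: top-K recall plus one level of associations in a single round trip
# with Postgres + pgvector. Table and column names are hypothetical.
import psycopg2

def retrieve(conn, query_embedding, k=5):
    # pgvector accepts vectors written as '[v1,v2,...]' literals cast to ::vector
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    sql = """
        WITH top_chunks AS (
            SELECT id, content
            FROM chunks
            ORDER BY embedding <=> %s::vector   -- cosine distance
            LIMIT %s
        )
        SELECT c.id, c.content, a.relation, t.content AS associated
        FROM top_chunks c
        LEFT JOIN associations a ON a.source_id = c.id
        LEFT JOIN chunks t ON t.id = a.target_id;
    """
    with conn.cursor() as cur:
        cur.execute(sql, (vec, k))
        return cur.fetchall()
```

The CTE does the top-K vector cut, and the joins pull the first-order associations in the same query, which is the point about not needing a graph database for one-level-deep recall.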
Very interesting, thank you for making this available!
At OpenAdapt (https://github.com/OpenAdaptAI/OpenAdapt) we are looking into using pm4py (https://github.com/pm4py) to extract a process graph from a recording of user actions.
I will look into this more closely. In the meantime, could the authors share their perspective on whether Memary could be useful here?
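For concreteness, the kind of extraction we have in mind looks roughly like this minimal sketch using pm4py's simplified discovery API; the column names for the recorded user actions are hypothetical placeholders:

```python
# Rough sketch: turning a recording of user actions into a process graph
# with pm4py. Column names are placeholders for whatever OpenAdapt records.
import pandas as pd
import pm4py

# One row per recorded user action: which session it belongs to,
# what the action was, and when it happened.
df = pd.DataFrame({
    "session_id": ["s1", "s1", "s1", "s2", "s2"],
    "action":     ["open_app", "click_button", "type_text", "open_app", "type_text"],
    "timestamp":  pd.to_datetime([
        "2024-05-01 10:00", "2024-05-01 10:01", "2024-05-01 10:02",
        "2024-05-01 11:00", "2024-05-01 11:01",
    ]),
})

# Map our columns onto the case/activity/timestamp keys pm4py expects.
log = pm4py.format_dataframe(
    df, case_id="session_id", activity_key="action", timestamp_key="timestamp"
)

# Directly-follows graph: edge weights count how often one action follows another.
dfg, start_activities, end_activities = pm4py.discover_dfg(log)
print(dfg)
```

From there pm4py can also discover Petri nets or BPMN models if the directly-follows graph turns out to be too coarse.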
These new systems would do well to have a compelling “wow, this solves a hard problem that can’t be solved in any other straightforward way” moment.
The current YouTube video has a query about the Dallas Mavericks and it’s not clear how it’s using any of its memory or special machinery to answer the query: https://www.youtube.com/watch?v=GnUU3_xK6bg
I hate when I find a cool AI project, open the GitHub repo to read the setup instructions, and see "insert OpenAI API key." Nothing will make me lose interest faster.
Looks cool. This is similar to what I'm doing for long-term memory in AISH, but packaged up nicely. Others have pointed out that they're somewhat abusing the term KG. But ... you could imagine other processes poring over the "raw" text chunks and building up a true KG from that.
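Something along those lines, as a sketch: here `extract_triples()` is a hypothetical stand-in for whatever actually does the reading (an LLM prompt, OpenIE, spaCy rules), and the canned triple it returns is just so the example runs:

```python
# Sketch: a background pass that turns raw text chunks into a typed KG.
# extract_triples() is a hypothetical placeholder for a real entity/relation
# extractor; it returns a canned example here so the sketch is runnable.
import networkx as nx

def extract_triples(chunk: str) -> list[tuple[str, str, str]]:
    # Placeholder: a real implementation would return the
    # (subject, relation, object) triples found in the chunk.
    return [("Alice", "works_for", "Acme")]

def build_kg(chunks: list[str]) -> nx.MultiDiGraph:
    kg = nx.MultiDiGraph()
    for chunk in chunks:
        for subj, rel, obj in extract_triples(chunk):
            # Typed edge, plus a pointer back to the source chunk for provenance.
            kg.add_edge(subj, obj, key=rel, relation=rel, source=chunk)
    return kg

kg = build_kg(["Alice has worked for Acme since 2021."])
print(list(kg.edges(keys=True)))
```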
Sounds promising. Can this system be integrated with the Wikidata knowledge graph instead?
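For context, the kind of lookup I mean is at least easy to prototype against Wikidata's public SPARQL endpoint; a rough sketch, where the query itself is just an example:

```python
# Sketch: fetching facts from Wikidata's public SPARQL endpoint.
# The example query asks for entities whose English label is "Paris".
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?item ?itemLabel WHERE {
  ?item rdfs:label "Paris"@en .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

resp = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "kg-demo/0.1"},  # Wikidata asks clients to set a UA
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```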
Neo4j was so unbelievably slow to load data, bloated, and hard to get working on my corporate-managed box that I wasn't too sad when it turned out unable to handle my workload. Then the security team asked me why it was phoning home every 30 seconds. Ugh.
I have since found Kuzu DB, which looks miles ahead architecturally. Plus no JVM. But I haven't yet given it a shot to see how rough the edges are. At the time, it was easier just to stay in plain application code.
Hopefully the workloads this tool is intended for won't notice the bloat. But it would be nice to be able to dump huge loads of data into this knowledge graph as well, and let an LLM generate queries against it.
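Something along these lines, as a sketch: `llm()` is a hypothetical stand-in for whatever model generates the Cypher (it returns a canned query here so the example runs), and the connection details are placeholders:

```python
# Sketch: have a model write Cypher and run it against the graph via the
# official neo4j Python driver. llm() is a hypothetical placeholder.
from neo4j import GraphDatabase

def llm(prompt: str) -> str:
    # Placeholder: swap in a real model call that returns plain Cypher.
    return "MATCH (n) RETURN count(n) AS nodes"

def ask_graph(question: str, uri="bolt://localhost:7687", auth=("neo4j", "password")):
    cypher = llm(
        "Translate this question into a single read-only Cypher query, "
        "with no explanation:\n" + question
    )
    driver = GraphDatabase.driver(uri, auth=auth)
    try:
        with driver.session() as session:
            return [record.data() for record in session.run(cypher)]
    finally:
        driver.close()

print(ask_graph("How many entities are in the graph?"))
```

In practice you'd want to validate the generated Cypher, or at least restrict it to read-only clauses, before executing it.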
How many times are people going to reinvent, rename and resell a database?
This is a really cool project, but is it just me that feels slightly uncomfortable with its name sounding so similar to "mammary"?
How does it compare with Zep AI? Does anyone know?
This seems like it's overloading the term "knowledge graph" relative to its origins. Rather than having information and facts encoded into the graph, this appears to be a sort of similarity search over complete responses. It's blog-style "related content" links to documents rather than encoded facts.
Searching through their sources, it looks like the problem came from Neo4j's blog post conflating "knowledge augmentation" from a Microsoft research paper with "knowledge graph" (because of course they had to add "graph" to the title).
This approach is fine, and probably useful, but it's not a knowledge graph in the sense that its structure isn't encoding anything about why or how different entities are actually related. As a concrete example: in a knowledge graph you might have an entity "Joe" and a separate entity "Paris". Joe is currently located in Paris, so there would be a typed edge between the two entities, something like "LocatedAt".
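In code, that fact is a first-class typed triple rather than a similarity link. A tiny rdflib sketch of the Joe/Paris example, with a made-up namespace:

```python
# The Joe/Paris example as actual encoded facts: typed entities and a
# typed edge between them, expressed as RDF triples via rdflib.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Joe, RDF.type, EX.Person))
g.add((EX.Paris, RDF.type, EX.City))
g.add((EX.Joe, EX.locatedAt, EX.Paris))   # the typed "LocatedAt" edge

print(g.serialize(format="turtle"))
```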
I didn't dive into the code, but from what I inferred from the description and the referenced literature, it is instead storing complete responses as "entities" and simply doing RAG-style similarity searches against other nodes. It's a graph-structured search index for sure, but not a knowledge graph by the standard definitions.