Immutable Data Structures in Qdrant

by andre-z on 8/21/2024, 6:29 PM with 13 comments

by ThinkBeat on 8/23/2024, 12:14 PM

I do not understand how you can pack 1536 dimensions into 192 bytes. That seems entirely impossible?

> Vector search, on the other hand, requires reading a lot of small vectors, which might create a large overhead. It is especially noticeable if we use binary quantization, where the size of even large OpenAI 1536d vectors is compressed down to 192 bytes. Dataset size: 2M 768d vectors (~6 GB raw data), binary quantization, 650 MB RAM limit. All benchmarks are made with minimal RAM allocation to demonstrate disk cache efficiency.
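The arithmetic works out because binary quantization keeps only one bit per dimension (typically the sign of each component), so 1536 dimensions become 1536 bits = 192 bytes. A minimal sketch of that packing, using NumPy (not Qdrant's actual implementation):

```python
import numpy as np

# A raw 1536d float32 vector takes 1536 * 4 = 6144 bytes.
vector = np.random.randn(1536).astype(np.float32)

# Binary quantization keeps only the sign bit of each dimension.
bits = vector > 0            # one boolean per dimension
packed = np.packbits(bits)   # pack 8 dimensions into each byte

print(vector.nbytes)  # 6144 bytes raw
print(packed.nbytes)  # 1536 / 8 = 192 bytes quantized
```

Distances over the packed form reduce to Hamming distance via XOR and popcount, which is why such a lossy representation can still be fast and useful for candidate retrieval (usually followed by rescoring with the original vectors).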

by wredueon 8/21/2024, 9:26 PM

>Immutable data structures, while tricky to implement correctly, offer significant performance gains, especially for read-heavy systems like search engines. They allow us to take full advantage of hardware optimizations, reduce memory overhead, and improve cache performance.

Nonsense.
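For context, the quoted claim is not as exotic as it may sound: if data never changes after construction, you can trade update flexibility for a denser, cache-friendlier layout. A common example is replacing a pointer-heavy map with a sorted, contiguous array plus binary search. This is an illustrative sketch, not Qdrant's code:

```python
import bisect

class ImmutableMap:
    """A read-only map stored as two parallel sorted arrays.

    Compared to a hash map or tree, all entries live in contiguous
    memory: no per-node allocations, no pointer chasing, and lookups
    are a cache-friendly binary search.
    """

    def __init__(self, pairs):
        items = sorted(pairs)
        self._keys = [k for k, _ in items]
        self._values = [v for _, v in items]

    def get(self, key, default=None):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return default

m = ImmutableMap([("b", 2), ("a", 1), ("c", 3)])
print(m.get("b"))  # 2
print(m.get("z"))  # None
```

The trade-off is that any update requires rebuilding the structure, which is why this approach suits read-heavy workloads like search-engine segments that are written once and queried many times.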