Improving Text Embeddings with Large Language Models

by cmcollier on 1/2/2024, 6:59 PM with 6 comments

by binarymax on 1/2/2024, 8:04 PM

Interesting, but this aspect makes me double-take: "We demonstrate that Mistral-7B, when fine-tuned solely on synthetic data, attains competitive performance on the BEIR [40] and MTEB [27] benchmarks".

E5/BGE large are an order of magnitude smaller than Mistral-7B. So is this just "bigger model wins" in disguise?

I need to read the whole paper carefully, but this jumped out at me.

by nalzok on 1/2/2024, 10:58 PM

> Subjects: Computation and Language (cs.CL); Information Retrieval (cs.IR)

I'm surprised they didn't put `Machine Learning (cs.LG)` and `Machine Learning (stat.ML)`.

by 3abiton on 1/3/2024, 12:25 AM

I am confused, aren't LLMs already embeddings of text?
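(Not quite: an LLM emits one hidden-state vector per token, while an embedding model must reduce a whole text to a single fixed-size vector, so some pooling step is needed. A minimal sketch of last-token pooling, which is the strategy this paper uses for its decoder-only model, with random arrays standing in for real model hidden states:)

```python
import numpy as np

def last_token_pool(hidden_states, attention_mask):
    """Collapse per-token hidden states into one embedding per text
    by taking the hidden state of the last non-padding token."""
    # hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    last_idx = attention_mask.sum(axis=1) - 1  # index of each text's last real token
    pooled = hidden_states[np.arange(hidden_states.shape[0]), last_idx]
    # L2-normalize so cosine similarity reduces to a dot product
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 5, 8))          # fake hidden states for 2 texts
mask = np.array([[1, 1, 1, 0, 0],       # first text: 3 real tokens
                 [1, 1, 1, 1, 1]])      # second text: 5 real tokens
emb = last_token_pool(h, mask)
print(emb.shape)  # (2, 8)
```

(Other embedding models pool differently, e.g. mean pooling over all non-padding tokens; the point is just that the pooling and the contrastive fine-tuning, not the raw LLM, are what produce usable text embeddings.)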