Latent Dictionary: 3D map of the Oxford 3000 + search words via DistilBERT embeddings

by pps on 12/30/2023, 1:07 PM with 42 comments

by minimaxir on 12/30/2023, 7:46 PM

Some notes on how embeddings/DistilBERT embeddings work since the other comments are confused:

1) There are two primary ways to have models generate embeddings: implicitly, from an LLM, by mean-pooling its last hidden state, since the model has to learn to map text into a distinct latent space anyway to work correctly (this is the DistilBERT case; see the first sketch after this list); or explicitly, with a model trained using something like triplet loss to directly incentivise learning similarity/dissimilarity. Popular text-embedding models like BAAI/bge-large-en-v1.5 tend to use the latter approach.

2) The famous word2vec examples, e.g. king - man + woman ≈ queen, only work because word2vec is a shallow network and the model learns the word embeddings directly, instead of them being emergent. The latent space still maps such words close together, as shown in this demo, but there isn't any algebraic intuition: you can get close with algebra, but no cigar (see the gensim sketch after this list).

3) DistilBERT is pretty old (2019) and distilled from a 2018 model (BERT) trained on Wikipedia and books, so there will be significant text drift, in addition to it lacking the benefit of newer modeling techniques and more robust training data. I do not recommend using it for production applications nowadays.

4) There is an under-discussed opportunity for dimensionality reduction techniques like PCA (which this demo uses to get the data into 3D; the PCA step appears in the first sketch after this list) to improve both signal-to-noise and distinctiveness. I am working on a blog post about a new technique for handling dimensionality reduction for text embeddings better, which may have interesting and profound usability implications.
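For anyone who wants to poke at these points directly, here is a rough sketch of (1) and (4), assuming transformers, torch, and scikit-learn are installed; the model name matches the demo but the word list is just an illustrative pick, not what the site actually feeds in:

    # Mean-pool DistilBERT's last hidden state into per-word embeddings (point 1),
    # then project them to 3D with PCA (point 4) - roughly what the demo appears to do.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.decomposition import PCA

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModel.from_pretrained("distilbert-base-uncased")
    model.eval()

    words = ["dog", "electric", "life", "human", "greet", "chicken", "a"]  # illustrative

    with torch.no_grad():
        enc = tokenizer(words, padding=True, return_tensors="pt")
        hidden = model(**enc).last_hidden_state             # (batch, seq_len, 768)
        mask = enc["attention_mask"].unsqueeze(-1)           # zero out padding tokens
        emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling -> (batch, 768)

    coords = PCA(n_components=3).fit_transform(emb.numpy())  # (batch, 3) points to plot
    for word, xyz in zip(words, coords):
        print(word, xyz.round(3))

And a sketch of (2) in a static embedding space via gensim; the GloVe model here is just one convenient pretrained choice, not something the demo uses:

    # The classic analogy only works (approximately) in static embedding spaces.
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-100")  # small pretrained GloVe vectors
    # king - man + woman ~= queen: usually near the top, but "close, no cigar"
    print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))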

by tikimcfee on 12/30/2023, 4:50 PM

Edit: I think this is fascinating. If you use words like dog, electric, life, and human, all of them appear in one mass; however, words like greet, chicken, and "a" appear in a different, denser section. I think it's interesting that the words have diverged in location, with some apparent relationship to the way the words are used. If this were truly random, I would expect those words to be mixed in with the other ones.

I have this, except you can see every single word in any dictionary at once in space; it renders individual glyphs. It can show an entire dictionary of words, with definitions and roots, and let you fly around in them. It's fun. I built a sample that "plays" a sentence and its definitions: GitHub.com/tikimcfee/LookAtThat. The more I see stuff like this, the more I want to complete it. It's heartening to see so many people taken with seeing words... I just wish I knew where to find these people to, like, befriend and get better. I'm getting the feeling I just kind of exist between worlds of lofty ideas and people that are incredibly smart sticking around other people that are incredibly smart.

by wrsh07 on 12/30/2023, 4:25 PM

I wish there were more context and maybe the ability to do math on the vectors

E.g., what is the real distance between two given vectors? That should be easy to compute.

Similarly: what do I get from summing two vectors and what are some nearby vectors?

Maybe just generally: what are some nearby vectors?

Without any additional context, it's just a point cloud with a couple of randomly labeled elements.
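The distance and nearest-neighbour parts are indeed easy once you can get at the vectors. A minimal numpy sketch, assuming a vectors dict of {word: 1-D array} built elsewhere (e.g. from mean-pooled DistilBERT outputs); none of this is exposed by the demo itself:

    # Cosine distance between two words, and the nearest neighbours of an arbitrary
    # query vector (e.g. the sum of two word vectors). `vectors` is assumed to be a
    # {word: np.ndarray} mapping built separately.
    import numpy as np

    def cosine_distance(a, b):
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest(query, vectors, k=5):
        # sort the vocabulary by cosine distance to the query vector
        return sorted(vectors, key=lambda w: cosine_distance(query, vectors[w]))[:k]

    # Usage, given some `vectors` mapping:
    # print(cosine_distance(vectors["king"], vectors["queen"]))
    # print(nearest(vectors["king"] + vectors["woman"], vectors))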

by granawkins on 12/30/2023, 11:48 PM

Hey guys, I'm the bored SOB who built this. Thanks for the awesome discussion, a lot of you know more about this than I do!

I hadn't planned to keep building this but if I do, what should I add/change?

by chaxor on 12/30/2023, 7:42 PM

Typically these types of single-word embedding visualizations work much better with non-contextualized models, such as the more traditional gensim or word2vec approaches, because contextual encoder-based embedding models like BERT don't 'bake in' as much to the token (word) itself and instead rely on its context to define it. Also, PCA on contextual models like BERT often ends up with the first principal component ($PC_0$) aligned with the length of the document.
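A small sketch of that context-dependence, assuming distilbert-base-uncased and two made-up sentences: the same surface token gets a noticeably different vector depending on the sentence around it, which is exactly the information a single-word plot throws away.

    # The token "bank" gets different contextual embeddings in different sentences,
    # illustrating why single-word plots suit static embeddings better.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModel.from_pretrained("distilbert-base-uncased")
    model.eval()

    def token_vector(sentence, word):
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]          # (seq_len, 768)
        tokens = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        return hidden[tokens.index(word)]                       # vector for that token

    a = token_vector("she sat by the river bank", "bank")
    b = token_vector("he deposited cash at the bank", "bank")
    # not 1.0: the surrounding context shifts the token's vector
    print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())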

by kvakkefly on 12/30/2023, 3:03 PM

Running the same words multiple times, I get different visualizations. I don't really understand what's going on, but I like the idea of visualizing embeddings.

by thom on 12/30/2023, 2:46 PM

Seems mostly nonsensical, not sure if that's a bug or some deeper point I'm missing.

by pamelafox on 12/30/2023, 9:59 PM

I’m looking for more resources like this that attempt to visually explain vectors, as I’ll be giving some talks around vector search. Does anyone have related suggestions?

by tetris11 on 12/30/2023, 4:51 PM

Interesting that "cromulent" and "hentai" seem to map right next to each other, as well as the words "decorate" and "spare".

by eurekin on 12/30/2023, 2:31 PM

I added these words in succession:

> man woman king queen ruler force powerful care

and couldn't reliably determine the position of any of them.

by smrtinsert on 12/30/2023, 6:13 PM

I would love to see the quickest path between two words, for example between color and colour.
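One way such a path could be sketched, using a static gensim/GloVe space rather than the demo's DistilBERT embeddings (the model name, step count, and the snap-to-nearest-word idea are all assumptions, not anything the demo does): linearly interpolate between the two word vectors and label each step with its nearest vocabulary word.

    # Walk from "color" to "colour" by interpolating between their vectors and
    # snapping each step to the nearest word. Assumes both words are in the
    # vocabulary of this particular pretrained GloVe model.
    import numpy as np
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-100")

    def word_path(a, b, steps=5):
        path = []
        for t in np.linspace(0.0, 1.0, steps):
            vec = (1 - t) * kv[a] + t * kv[b]                 # interpolated point
            path.append(kv.similar_by_vector(vec, topn=1)[0][0])
        return path

    print(word_path("color", "colour"))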

by larodi on 12/30/2023, 2:46 PM

Is this with some sort of dimensionality reduction of the embedding space?

by cuttysnark on 12/30/2023, 7:50 PM

edge of the galaxy: 'if when that then wherever where while for'