100M Token Context Windows

by gklitt on 8/29/2024, 5:32 PM, with 22 comments

by shazami on 8/29/2024, 7:53 PM

FYI, wouldn't interview here. Got rejected after a 30-minute behavioral screen, after spending 8 hours on an unpaid take-home.

by dinobones on 8/29/2024, 7:54 PM

Long context windows are IMO, “AGI enough.”

100M context window means it can probably store everything you’ve ever told it for years.

Couple this with multimodal capabilities, like a robot encoding vision and audio into tokens, and you can get autonomous assistants that learn your house/habits/chores really quickly.
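The "years" claim above holds up on a back-of-envelope basis. A rough sketch, with the usage numbers (words per day, tokens per word) being my own assumptions rather than anything from the thread:

```python
# Back-of-envelope: how long could a 100M-token context window cover?
# Assumed (not from the thread): a heavy user sends an assistant
# ~16,000 words/day, at ~1.3 tokens per English word.
CONTEXT_TOKENS = 100_000_000
WORDS_PER_DAY = 16_000
TOKENS_PER_WORD = 1.3

tokens_per_day = WORDS_PER_DAY * TOKENS_PER_WORD  # ~20,800 tokens/day
days = CONTEXT_TOKENS / tokens_per_day
years = days / 365

print(f"{days:,.0f} days, roughly {years:.1f} years of interaction history")
```

Under these assumptions the window holds on the order of a decade of conversation, so "everything you've ever told it for years" is plausible even for heavy use.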

by smusamashah on 8/29/2024, 6:42 PM

It should be benchmarked against something like RULER[1]

1: https://github.com/hsiehjackson/RULER (RULER: What’s the Real Context Size of Your Long-Context Language Models)

by fsndz on 8/29/2024, 8:01 PM

Context windows are becoming larger and larger, and I anticipate more research focusing on this trend. Could this signal the eventual demise of RAG? Only time will tell. I recently experimented with RAG and the limitations are often surprising (https://www.lycee.ai/blog/rag-fastapi-postgresql-pgvector). I wonder if we will see some of the same limitations with long-context LLMs. In-context learning is probably a form of arithmetic over semantic/lexical cues.
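For context on what long context would be replacing: the retrieval step in a RAG pipeline is just nearest-neighbor search over embeddings. A toy sketch (not from the linked post; bag-of-words vectors stand in for a real embedding model, which is exactly where lexical-cue limitations show up):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents most similar to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "pgvector stores embeddings inside PostgreSQL",
    "FastAPI serves the retrieval endpoint",
    "the cat sat on the mat",
]
print(retrieve("how do I store embeddings in PostgreSQL", docs, k=1))
# Retrieval succeeds only when query and document share surface cues;
# a paraphrased query with no word overlap scores 0 here.
```

A 100M-token context sidesteps this step entirely by stuffing every document into the prompt, trading retrieval misses for compute cost.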

by Sakos on 8/29/2024, 8:00 PM

I was wondering how they could afford 8,000 H100s, but I guess I accidentally skipped over this part:

> We’ve raised a total of $465M, including a recent investment of $320 million from new investors Eric Schmidt, Jane Street, Sequoia, Atlassian, among others, and existing investors Nat Friedman & Daniel Gross, Elad Gil, and CapitalG.

Yeah, I guess that'd do it. Who are these people and how'd they convince them to invest that much?

by anonzzzies on 8/30/2024, 2:17 PM

What is the state of the art for context length on open models? Magic won't be open, I guess, after getting ~$500M in VC money.

by samber on 8/29/2024, 6:48 PM

Based on Mamba?

by htrp on 8/29/2024, 9:45 PM

Does anyone have a detailed technical breakdown of these guys? Not quite sure how their LTM architecture works.