Keftek

Recursive Language Models: Processing 10M+ Tokens Without Context Rot

An MIT framework lets any LLM process inputs 100x beyond its context window via recursive self-calls, addressing the long-document bottleneck.
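The core idea can be sketched as a divide-and-recurse loop: when an input exceeds the model's window, split it, process each piece, and then recurse on the combined partial results. This is a minimal illustrative sketch, not the paper's actual framework; the `summarize` function, the word-based token count, and the halving strategy are all assumptions standing in for real LLM calls.

```python
# Minimal sketch of recursive self-calls over a long input.
# `summarize` is a placeholder for an LLM call, NOT the paper's API;
# tokens are approximated by whitespace-split words for simplicity.

MAX_TOKENS = 100  # toy context-window limit (in words)

def summarize(text: str) -> str:
    """Stand-in for an LLM call that fits in the window.
    Here it just keeps the first 10 words."""
    return " ".join(text.split()[:10])

def recursive_process(text: str) -> str:
    """If the input exceeds the window, split it in half, process each
    half recursively, then recurse once more on the combined results."""
    words = text.split()
    if len(words) <= MAX_TOKENS:
        return summarize(text)
    mid = len(words) // 2
    left = recursive_process(" ".join(words[:mid]))
    right = recursive_process(" ".join(words[mid:]))
    # Recursive self-call over the concatenated partial results.
    return recursive_process(left + " " + right)

long_input = "token " * 10_000  # far beyond the toy window
result = recursive_process(long_input)
print(len(result.split()))
```

Each level of recursion shrinks the problem by roughly half, so an input 100x the window is handled in a logarithmic number of levels, and every individual model call stays within the window.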

LLM Infra
Read original on arXiv