This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/jsonathan on 2025-02-20 00:17:48+00:00.


RAG is suspiciously inelegant. Something about using traditional IR techniques to fetch context for a model feels… early-stage. It reminds me of how Netflix had to mail DVDs before the internet was good enough for streaming.

I just can’t imagine LLMs working with databases this way in the future. Why not do retrieval during inference, instead of before? For example, if the database were embedded directly in the KV cache, retrieval could be learned via gradient descent just like everything else (rough sketch below). That at least seems more elegant to me than using (low-precision) embedding search to gather chunks of context and stuff them into a prompt.
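To make the contrast concrete, here’s a rough sketch. This is my own illustration, not code from any existing system; the shapes, names, and random "document states" are all placeholders. Part (a) is the usual embed-search-and-stuff pipeline; part (b) is the kind of thing I mean: precompute key/value vectors for the database and let the model cross-attend to them during the forward pass, so retrieval is just attention and gets gradients.

```python
import torch
import torch.nn.functional as F

d_model = 64
n_docs = 10_000

# (a) Classic RAG: low-precision embedding search, then prompt stuffing.
doc_embeddings = F.normalize(torch.randn(n_docs, d_model), dim=-1)  # stand-in vector index

def rag_prompt(query_text: str, query_emb: torch.Tensor, docs: list[str], k: int = 3) -> str:
    scores = doc_embeddings @ F.normalize(query_emb, dim=-1)   # cosine similarity
    top = scores.topk(k).indices.tolist()
    context = "\n\n".join(docs[i] for i in top)
    return f"Context:\n{context}\n\nQuestion: {query_text}"    # retrieved text goes into the prompt

# (b) "Database in the KV cache": store keys/values per document chunk and
# cross-attend to them inside the forward pass. W_q, W_k, W_v are ordinary
# learned projections, so the retrieval behaviour itself gets gradients.
W_q = torch.nn.Linear(d_model, d_model)
W_k = torch.nn.Linear(d_model, d_model)
W_v = torch.nn.Linear(d_model, d_model)

doc_states = torch.randn(n_docs, d_model)   # hypothetical encoded document chunks
memory_k = W_k(doc_states)                  # precomputed once, offline
memory_v = W_v(doc_states)

def retrieve_by_attention(hidden: torch.Tensor) -> torch.Tensor:
    """hidden: (seq, d_model) decoder states; returns retrieved context vectors."""
    q = W_q(hidden)                                                   # (seq, d)
    attn = torch.softmax(q @ memory_k.T / d_model ** 0.5, dim=-1)     # (seq, n_docs)
    return attn @ memory_v                                            # differentiable "retrieval"
```

Of course, dense attention over the whole datastore doesn’t scale, which is presumably why RETRO still uses approximate nearest-neighbour search first and only cross-attends to the retrieved chunks. But the point stands: the retrieval step lives inside the model rather than in a prompt-assembly script.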

And FWIW I don’t think long-context models are the future, either. There’s the lost-in-the-middle effect, and the risk of context pollution, where irrelevant context degrades performance even when all the correct context is also present. Reasoning performance also degrades as more context is added.

Regardless of what the future looks like, my sense is that RAG will become obsolete in a few years. What do y’all think?

EDIT: DeepMind’s RETRO (which cross-attends to retrieved chunks during the forward pass) and Self-RAG (which trains the model to decide when to retrieve and to critique what it retrieved) seem relevant.