What is Retrieval Augmented Generation?
How does RAG affect the results of prompt engineering?
News
Following recent news that AI tools can be used for hacking and cracking security systems, Microsoft and OpenAI shut down several accounts linked to hacking activity.
Nature journals are fighting back against fake images. Generative AI is making it difficult to distinguish fabricated images from genuine ones in scientific papers.
A deep dive into why the New York Times could win their case against OpenAI.
AI Research
AnyGPT is a multi-modal LLM with discrete sequence modeling.
Prompt Injection is one way that LLMs can be hacked.
AI Tool
Twip AI is a content creation tool.
Jaq n’ Jil is an AI writer with multiple use cases. I published a sample output from Jaq n’ Jil on Medium.
Akool is an AI-powered platform that enables eCommerce businesses to generate images, text, videos, product descriptions, and more.
Book Recommendation
AI as a Service: Serverless Machine Learning with AWS is a practical handbook for building and deploying serverless AI applications, without bogging you down in theory.
DreamStudio prompt » Blue elephant with big floppy ears flying above a circus
Retrieval-Augmented Generation, or ‘RAG’
Traditional large language models like GPT-3 are trained on vast amounts of text data to learn how to generate fluent and coherent text. However, a major limitation is that they only know what's contained in their training data. This can lead to factual inconsistencies or a lack of specificity when the model produces text about topics and entities it saw infrequently during training.
Retrieval-augmented generation (RAG) models aim to mitigate this limitation. The key idea is to pair a neural text generator with a retrieval system: the retriever module finds external knowledge relevant to the original input, and this retrieved context is fused with the input and fed to the generator model.
More specifically, RAG models contain two main components: a retriever and a generator. The retriever can use approaches like sparse vectors and nearest-neighbor search to pull relevant documents or passages from a large corpus or database. The generator is often a seq2seq language model, such as T5, GPT-2, or BART, pre-trained on large amounts of unlabeled text.
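To make the retriever half concrete, here is a minimal sketch using TF-IDF sparse vectors and cosine-similarity nearest-neighbor search with scikit-learn. The three-document corpus is just a stand-in for a real knowledge base.

```python
# Minimal sketch of a sparse-vector retriever: index a corpus with
# TF-IDF, then find the nearest neighbors of a query by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG combines a retriever with a text generator.",
    "T5 and BART are seq2seq models pre-trained on unlabeled text.",
    "Sparse vectors such as TF-IDF support fast similarity search.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)  # index the corpus once

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_k = scores.argsort()[::-1][:k]  # highest-scoring passages first
    return [corpus[i] for i in top_k]

print(retrieve("Which models are used as RAG generators?"))
```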
By conditioning the generator on retrieved external knowledge during text generation, the model can incorporate facts, specifics, and details that may not exist in its training data. This allows the generation of more factual, comprehensive, and grounded text around specialized topics or entities, combining the fluency of neural language models with the precision of retrieved knowledge.
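Here is a rough sketch of that conditioning step, reusing the retrieve() function above. The flan-t5 model and the prompt format are illustrative choices, not a prescribed recipe.

```python
# Fusion step: retrieved passages are prepended to the user's question,
# and the combined prompt conditions a seq2seq generator.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

def rag_answer(question: str) -> str:
    # Reuses retrieve() from the retriever sketch above.
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return generator(prompt, max_new_tokens=64)[0]["generated_text"]

print(rag_answer("Which models are often used as RAG generators?"))
```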
Early RAG models like REALM focused on question answering, but newer implementations apply RAG more broadly to text summarization, dialogue, and other language generation tasks. As knowledge bases and retriever models continue to advance, RAG offers an exciting direction for producing text that is both human-like and factual, drawing on the strengths of neural generative models and robust knowledge retrieval.
This design offers several practical benefits:
Enables Building Apps with Less Training Data: Because knowledge lives in the retrieval corpus rather than in model weights, RAG lowers the bar for building usable AI applications like chatbots.
Makes AI Assistants More Factual: Digital assistants powered by RAG can give more accurate, factual responses rather than speculation, because retrieved knowledge grounds their outputs.
Allows Customizing for Specific Use Cases: The retrievers in RAG can be tailored to company-specific knowledge bases or technical document collections, as sketched below.
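As a rough illustration of that last point, tailoring the retriever can be as simple as re-indexing the earlier sketch over your own documents; the generator itself stays unchanged. The internal_docs directory is a hypothetical stand-in for a company knowledge base.

```python
# Re-index the TF-IDF retriever from the earlier sketch over a
# company-specific document collection; the path is hypothetical.
from pathlib import Path

corpus = [p.read_text() for p in Path("internal_docs").glob("*.txt")]
doc_vectors = vectorizer.fit_transform(corpus)  # rebuild the sparse index
```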