{"id":3261,"date":"2023-11-15T17:42:38","date_gmt":"2023-11-15T17:42:38","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/11\/15\/what-is-retrieval-augmented-generation\/"},"modified":"2023-11-15T17:42:38","modified_gmt":"2023-11-15T17:42:38","slug":"what-is-retrieval-augmented-generation","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/11\/15\/what-is-retrieval-augmented-generation\/","title":{"rendered":"What Is Retrieval-Augmented Generation?"},"content":{"rendered":"<div id=\"bsf_rt_marker\">\n<p>To understand the latest advance in <a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/generative-ai\/\">generative AI<\/a>, imagine a courtroom.<\/p>\n<p>Judges hear and decide cases based on their general understanding of the law. Sometimes a case \u2014 like a malpractice suit or a labor dispute \u2014\u00a0 requires special expertise, so judges send court clerks to a law library, looking for precedents and specific cases they can cite.<\/p>\n<p>Like a good judge, large language models (<a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/data-science\/large-language-models\/\">LLMs<\/a>) can respond to a wide variety of human queries. 
But to deliver authoritative answers that cite sources, the model needs an assistant to do some research.<\/p>\n<p>The court clerk of AI is a process called retrieval-augmented generation, or RAG for short.<\/p>\n<h2><b>The Story of the Name<\/b><\/h2>\n<p>Patrick Lewis, lead author of the <a href=\"https:\/\/arxiv.org\/pdf\/2005.11401.pdf\">2020 paper that coined the term<\/a>, apologized for the unflattering acronym that now describes a growing family of methods across hundreds of papers and dozens of commercial services he believes represent the future of generative AI.<\/p>\n<figure id=\"attachment_68128\" aria-describedby=\"caption-attachment-68128\" class=\"wp-caption alignleft\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Patrick-Lewis-RAG-lead-author.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Patrick-Lewis-RAG-lead-author-150x150.jpg\" alt=\"Picture of Patrick Lewis, lead author of RAG paper\" width=\"250\" height=\"267\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68128\" class=\"wp-caption-text\">Patrick Lewis<\/figcaption><\/figure>\n<p>\u201cWe definitely would have put more thought into the name had we known our work would become so widespread,\u201d Lewis said in an interview from Singapore, where he was sharing his ideas with a regional conference of database developers.<\/p>\n<p>\u201cWe always planned to have a nicer sounding name, but when it came time to write the paper, no one had a better idea,\u201d said Lewis, who now leads a RAG team at AI startup Cohere.<\/p>\n<h2><b>So, What Is Retrieval-Augmented Generation?<\/b><\/h2>\n<p>Retrieval-augmented generation is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.<\/p>\n<p>In other words, it fills a gap in how LLMs work. Under the hood, LLMs are neural networks, typically measured by how many parameters they contain. 
An LLM\u2019s parameters essentially represent the general patterns of how humans use words to form sentences.<\/p>\n<p>That deep understanding, sometimes called parameterized knowledge, makes LLMs useful in responding to general prompts at light speed. However, it does not serve users who want a deeper dive into a current or more specific topic.<\/p>\n<h2><b>Combining Internal, External Resources<\/b><\/h2>\n<p>Lewis and colleagues developed retrieval-augmented generation to link generative AI services to external resources, especially ones rich in the latest technical details.<\/p>\n<p>The paper, with coauthors from the former Facebook AI Research (now Meta AI), University College London and New York University, called RAG \u201ca general-purpose fine-tuning recipe\u201d because it can be used by nearly any LLM to connect with practically any external resource.<\/p>\n<h2><b>Building User Trust<\/b><\/h2>\n<p>Retrieval-augmented generation gives models sources they can cite, like footnotes in a research paper, so users can check any claims. That builds trust.<\/p>\n<p>What\u2019s more, the technique can help models clear up ambiguity in a user query. It also reduces the possibility a model will make a wrong guess, a phenomenon sometimes called hallucination.<\/p>\n<p>Another great advantage of RAG is it\u2019s relatively easy. A <a href=\"https:\/\/ai.meta.com\/blog\/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models\/\">blog<\/a> by Lewis and three of the paper\u2019s coauthors said developers can implement the process with as few as <a href=\"https:\/\/huggingface.co\/facebook\/rag-token-nq\">five lines of code<\/a>.<\/p>\n<p>That makes the method faster and less expensive than retraining a model with additional datasets. 
And it lets users hot-swap new sources on the fly.<\/p>\n<h2><b>How People Are Using Retrieval-Augmented Generation\u00a0<\/b><\/h2>\n<p>With retrieval-augmented generation, users can essentially have conversations with data repositories, opening up new kinds of experiences. This means the number of possible applications for RAG could be many times the number of available datasets.<\/p>\n<p>For example, a generative AI model supplemented with a medical index could be a great assistant for a doctor or nurse. Financial analysts would benefit from an assistant linked to market data.<\/p>\n<p>In fact, almost any business can turn its technical or policy manuals, videos or logs into resources called knowledge bases that can enhance LLMs. These sources can enable use cases such as customer or field support, employee training and developer productivity.<\/p>\n<p>The broad potential is why companies including <a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/simplify-access-to-internal-information-using-retrieval-augmented-generation-and-langchain-agents\/\">AWS<\/a>, <a href=\"https:\/\/research.ibm.com\/blog\/retrieval-augmented-generation-RAG\">IBM<\/a>, <a href=\"https:\/\/www.glean.com\/\">Glean<\/a>, Google, Microsoft, NVIDIA, <a href=\"https:\/\/www.oracle.com\/artificial-intelligence\/generative-ai\/retrieval-augmented-generation-rag\/\">Oracle<\/a> and <a href=\"https:\/\/www.pinecone.io\/learn\/retrieval-augmented-generation\/\">Pinecone<\/a> are adopting RAG.<\/p>\n<h2><b>Getting Started With Retrieval-Augmented Generation\u00a0<\/b><\/h2>\n<p>To help users get started, NVIDIA developed a <a href=\"https:\/\/docs.nvidia.com\/ai-enterprise\/workflows-generative-ai\/0.1.0\/technical-brief.html\">reference architecture for retrieval-augmented generation<\/a>. 
It includes a sample chatbot and the elements users need to create their own applications with this new method.<\/p>\n<p>The workflow uses <a href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/generative-ai\/nemo-framework\/\">NVIDIA NeMo<\/a>, a framework for developing and customizing generative AI models, as well as software like <a href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/products\/triton-inference-server\/\">NVIDIA Triton Inference Server<\/a> and <a href=\"https:\/\/developer.nvidia.com\/blog\/optimizing-inference-on-llms-with-tensorrt-llm-now-publicly-available\/\">NVIDIA TensorRT-LLM<\/a> for running generative AI models in production.<\/p>\n<p>The software components are all part of <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\">NVIDIA AI Enterprise<\/a>, a software platform that accelerates development and deployment of production-ready AI with the security, support and stability businesses need.<\/p>\n<p>Getting the best performance for RAG workflows requires massive amounts of memory and compute to move and process data. The <a href=\"https:\/\/nvidianews.nvidia.com\/news\/gh200-grace-hopper-superchip-with-hbm3e-memory\">NVIDIA GH200 Grace Hopper Superchip<\/a>, with its 288GB of fast HBM3e memory and 8 petaflops of compute, is ideal \u2014 it can deliver a 150x speedup over using a CPU.<\/p>\n<p>Once companies get familiar with RAG, they can combine a variety of off-the-shelf or custom LLMs with internal or external knowledge bases to create a wide range of assistants that help their employees and customers.<\/p>\n<p>RAG doesn\u2019t require a data center. 
LLMs are debuting on Windows PCs, thanks to NVIDIA software that enables all sorts of applications users can access even on their laptops.<\/p>\n<figure id=\"attachment_68134\" aria-describedby=\"caption-attachment-68134\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Using-RAG-on-PCs.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Using-RAG-on-PCs.jpg\" alt=\"Chart shows running RAG on a PC\" width=\"1200\" height=\"659\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68134\" class=\"wp-caption-text\">An example application for RAG on a PC.<\/figcaption><\/figure>\n<p>PCs equipped with NVIDIA RTX GPUs can now run some AI models locally. By using RAG on a PC, users can link to a private knowledge source \u2013 whether that be emails, notes or articles \u2013 to improve responses. The user can then feel confident that their data source, prompts and response all remain private and secure.<\/p>\n<p>A <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/10\/17\/tensorrt-llm-windows-stable-diffusion-rtx\/\">recent blog<\/a> provides an example of RAG accelerated by TensorRT-LLM for Windows to get better results fast.<\/p>\n<h2><b>The History of Retrieval-Augmented Generation\u00a0<\/b><\/h2>\n<p>The roots of the technique go back at least to the early 1970s. 
That\u2019s when researchers in information retrieval prototyped what they called question-answering systems, apps that use natural language processing (<a href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/natural-language-processing\/\">NLP<\/a>) to access text, initially in narrow topics such as baseball.<\/p>\n<p>The concepts behind this kind of text mining have remained fairly constant over the years. But the machine learning engines driving them have grown significantly, increasing their usefulness and popularity.<\/p>\n<p>In the mid-1990s, the Ask Jeeves service, now Ask.com, popularized question answering with its mascot of a well-dressed valet. IBM\u2019s Watson became a TV celebrity in 2011 when it handily beat two human champions on the <i>Jeopardy!<\/i> game show.<\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Ask-Jeeves-2.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/Ask-Jeeves-2.jpg\" alt=\"Picture of Ask Jeeves, an early RAG-like web service\" width=\"620\" height=\"334\"><\/p>\n<p><\/a><\/p>\n<p>Today, LLMs are taking question-answering systems to a whole new level.<\/p>\n<h2><b>Insights From a London Lab<\/b><\/h2>\n<p>The seminal 2020 paper arrived as Lewis was pursuing a doctorate in NLP at University College London and working for Meta at a new London AI lab. 
The team was searching for ways to pack more knowledge into an LLM\u2019s parameters and using a benchmark it developed to measure its progress.<\/p>\n<p>Building on earlier methods and inspired by <a href=\"https:\/\/arxiv.org\/pdf\/2002.08909.pdf\">a paper<\/a> from Google researchers, the group \u201chad this compelling vision of a trained system that had a retrieval index in the middle of it, so it could learn and generate any text output you wanted,\u201d Lewis recalled.<\/p>\n<figure id=\"attachment_68146\" aria-describedby=\"caption-attachment-68146\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/IBM-Watson-wins-Jeopardy-YT.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/IBM-Watson-wins-Jeopardy-YT.jpg\" alt='Picture of IBM Watson winning on \"Jeopardy\" TV show, popularizing a RAG-like AI service' width=\"1280\" height=\"720\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68146\" class=\"wp-caption-text\">The IBM Watson question-answering system became a celebrity when it won big on the TV game show Jeopardy!<\/figcaption><\/figure>\n<p>When Lewis plugged into the work in progress a promising retrieval system from another Meta team, the first results were unexpectedly impressive.<\/p>\n<p>\u201cI showed my supervisor and he said, \u2018Whoa, take the win. This sort of thing doesn\u2019t happen very often,\u2019 because these workflows can be hard to set up correctly the first time,\u201d he said.<\/p>\n<p>Lewis also credits major contributions from team members Ethan Perez and Douwe Kiela, then of New York University and Facebook AI Research, respectively.<\/p>\n<p>When complete, the work, which ran on a cluster of NVIDIA GPUs, showed how to make generative AI models more authoritative and trustworthy. 
It\u2019s since been cited by hundreds of papers that amplified and extended the concepts in what continues to be an active area of research.<\/p>\n<h2><b>How Retrieval-Augmented Generation Works<\/b><\/h2>\n<p>At a high level, here\u2019s how an <a href=\"https:\/\/docs.nvidia.com\/ai-enterprise\/workflows-generative-ai\/0.1.0\/technical-brief.html\">NVIDIA technical brief<\/a> describes the RAG process.<\/p>\n<p>When users ask an LLM a question, the AI model sends the query to another model that converts it into a numeric format so machines can read it. The numeric version of the query is sometimes called an embedding or a vector.<\/p>\n<figure id=\"attachment_68152\" aria-describedby=\"caption-attachment-68152\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/NVIDIA-RAG-diagram-scaled.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/NVIDIA-RAG-diagram-scaled.jpg\" alt=\"NVIDIA diagram of how RAG works with LLMs\" width=\"2048\" height=\"901\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68152\" class=\"wp-caption-text\">Retrieval-augmented generation combines LLMs with embedding models and vector databases.<\/figcaption><\/figure>\n<p>The embedding model then compares these numeric values to vectors in a machine-readable index of an available knowledge base. 
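<\/p>
<p>As a toy sketch of this lookup step, the short Python example below stands in a word-count vector for a real embedding model and a plain list for a real vector database; the vocabulary, documents and function names are all invented for illustration.<\/p>

```python
# Toy illustration of the retrieval step in RAG: a query is turned
# into a vector, then compared against the vectors in a small index.
# The embedding here is a simple word-count vector over a fixed
# vocabulary, standing in for a real neural embedding model.

VOCAB = ['gpu', 'memory', 'rag', 'retrieval', 'vector', 'index', 'llm']

def embed(text):
    # Map text to a vector of word counts over the fixed vocabulary.
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

KNOWLEDGE_BASE = [
    'RAG links an LLM to an external index of documents',
    'A vector index stores one embedding per document',
    'GPU memory holds the model parameters during inference',
]

# Build the index up front: one vector per document.
INDEX = [(doc, embed(doc)) for doc in KNOWLEDGE_BASE]

def retrieve(query, top_k=1):
    # Compare the query vector to each indexed vector by dot product
    # and return the best-matching documents.
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), doc) for doc, v in INDEX]
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_k]]

print(retrieve('how does a vector index work'))
# prints ['A vector index stores one embedding per document']
```

<p>A production system would instead use a learned embedding model and an approximate nearest-neighbor index, but the embed, compare and retrieve pattern is the same.<\/p>
<p>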
When it finds a match or multiple matches, it retrieves the related data, converts it to human-readable words and passes it back to the LLM.<\/p>\n<p>Finally, the LLM combines the retrieved words and its own response to the query into a final answer it presents to the user, potentially citing sources the embedding model found.<\/p>\n<h2><b>Keeping Sources Current<\/b><\/h2>\n<p>In the background, the embedding model continuously creates and updates machine-readable indices, sometimes called vector databases, for new and updated knowledge bases as they become available.<\/p>\n<figure id=\"attachment_68149\" aria-describedby=\"caption-attachment-68149\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/LangChain-2-LLM-with-a-retriveal-process.jpg\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/11\/LangChain-2-LLM-with-a-retriveal-process-672x268.jpg\" alt=\"Chart of a RAG process described by LangChain\" width=\"672\" height=\"268\"><\/p>\n<p><\/a><figcaption id=\"caption-attachment-68149\" class=\"wp-caption-text\">A RAG process as described by LangChain.<\/figcaption><\/figure>\n<p>Many developers find LangChain, an open-source library, particularly useful for chaining together LLMs, embedding models and knowledge bases. 
NVIDIA uses LangChain in its reference architecture for retrieval-augmented generation.<\/p>\n<p>The LangChain community provides its own <a href=\"https:\/\/blog.langchain.dev\/tutorial-chatgpt-over-your-data\/\">description of a RAG process<\/a>.<\/p>\n<p>Looking forward, the future of generative AI lies in creatively chaining all sorts of LLMs and knowledge bases together to create new kinds of assistants that deliver authoritative results users can verify.<\/p>\n<p>Get hands-on experience using retrieval-augmented generation with an AI chatbot in this <a href=\"https:\/\/www.nvidia.com\/en-us\/launchpad\/ai\/generative-ai-knowledge-base-chatbot\/\">NVIDIA LaunchPad lab<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/what-is-retrieval-augmented-generation\/<\/p>\n","protected":false},"author":0,"featured_media":3262,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3261"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3261"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3261\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3262"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3261"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3261"},{"taxonomy":"post_
tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3261"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}