{"id":3743,"date":"2024-10-02T14:41:09","date_gmt":"2024-10-02T14:41:09","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/10\/02\/brave-new-world-leo-ai-and-ollama-bring-rtx-accelerated-local-llms-to-brave-browser-users\/"},"modified":"2024-10-02T14:41:09","modified_gmt":"2024-10-02T14:41:09","slug":"brave-new-world-leo-ai-and-ollama-bring-rtx-accelerated-local-llms-to-brave-browser-users","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/10\/02\/brave-new-world-leo-ai-and-ollama-bring-rtx-accelerated-local-llms-to-brave-browser-users\/","title":{"rendered":"Brave New World: Leo AI and Ollama Bring RTX-Accelerated Local LLMs to Brave Browser Users"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p><i>Editor\u2019s note: This post is part of the <\/i><a href=\"https:\/\/blogs.nvidia.com\/blog\/tag\/ai-decoded\/\"><i>AI Decoded series<\/i><\/a><i>, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.<\/i><\/p>\n<p>From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.<\/p>\n<p>Those efficiency boosts extend to everyday tasks, like web browsing. 
<a target=\"_blank\" href=\"https:\/\/brave.com\/\" rel=\"noopener\">Brave<\/a>, a privacy-focused web browser, recently launched a smart AI assistant called <a target=\"_blank\" href=\"https:\/\/brave.com\/leo\/\" rel=\"noopener\">Leo AI<\/a> that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.<\/p>\n<figure id=\"attachment_74324\" aria-describedby=\"caption-attachment-74324\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/leo-ai.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-74324\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/leo-ai.png\" alt=\"\" width=\"1407\" height=\"900\"><\/a><figcaption id=\"caption-attachment-74324\" class=\"wp-caption-text\">Leo AI helps users summarize articles and videos, surface insights from documents, answer questions and more.<\/figcaption><\/figure>\n<p>The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software that\u2019s optimized for the unique needs of AI.<\/p>\n<h2><b>Why Software Matters<\/b><\/h2>\n<p>NVIDIA GPUs power the world\u2019s AI, whether running in the data center or on a local PC. They contain <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/tensor-cores\/\" rel=\"noopener\">Tensor Cores<\/a>, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching \u2014 rapidly processing the huge number of calculations needed for AI simultaneously, rather than doing them one at a time.<\/p>\n<p>But great hardware only matters if applications can make efficient use of it. 
The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.<\/p>\n<p>The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft\u2019s DirectML and the one used by Brave and Leo AI via Ollama, called <a target=\"_blank\" href=\"https:\/\/github.com\/ggerganov\/llama.cpp\" rel=\"noopener\">llama.cpp<\/a>.<\/p>\n<p>Llama.cpp is an open-source library and framework. Through CUDA \u2014 the NVIDIA software application programming interface that enables developers to optimize for <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/rtx\/\" rel=\"noopener\">GeForce RTX<\/a> and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/technologies\/rtx\/\" rel=\"noopener\">NVIDIA RTX GPUs<\/a> \u2014 it provides Tensor Core acceleration for hundreds of models, including popular <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/glossary\/large-language-models\/\" rel=\"noopener\">large language models<\/a> (LLMs) like Gemma, Llama 3, Mistral and Phi.<\/p>\n<p>On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn\u2019t have to.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/ollama.com\/\" rel=\"noopener\">Ollama<\/a> is an open-source project that sits on top of llama.cpp and provides access to the library\u2019s features. It supports an ecosystem of applications that deliver local AI capabilities. 
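To make the division of labor concrete, here is a minimal, purely illustrative sketch of the kind of request an application hands to a local Ollama server, which in turn drives llama.cpp. The endpoint and field names match Ollama's documented /api/generate API as of this writing; the model name and prompt are assumptions for illustration.

```python
import json

# Build the JSON body a client would POST to a local Ollama server's
# /api/generate endpoint (by default: http://localhost:11434/api/generate).
# "llama3" is an example model name; any model pulled into Ollama works.
def build_generate_request(model: str, prompt: str) -> str:
    return json.dumps({
        "model": model,    # which local model to use
        "prompt": prompt,  # the user's query
        "stream": False,   # return one complete reply instead of chunks
    })

body = build_generate_request("llama3", "Summarize this article in one sentence.")
```

Sending this body with any HTTP client returns a JSON reply whose "response" field holds the generated text; applications such as Brave's Leo AI wrap exactly this kind of exchange.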
NVIDIA\u2019s optimizations span the entire technology stack \u2014 from hardware to system software to inference libraries and tools like Ollama \u2014 to deliver faster, more responsive AI experiences on RTX.<\/p>\n<h2><b>Local vs. Cloud<\/b><\/h2>\n<p>Brave\u2019s Leo AI can run in the cloud or locally on a PC through Ollama.<\/p>\n<p>There are many benefits to running inference with a local model. Because prompts are never sent to an outside server, the experience is private and always available. For instance, Brave users can get help with their finances or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two varieties of the same AI model.<\/p>\n<p>Users can also interact with models that have different specializations, such as bilingual models, compact models, code generation models and more.<\/p>\n<p>RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses up to 149 tokens per second \u2014 or approximately 110 words per second. 
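The tokens-to-words figure quoted above follows a common rule of thumb: English prose averages roughly 0.75 words per token, though the exact ratio depends on the tokenizer and the text.

```python
# Rule-of-thumb conversion from token throughput to word throughput.
# The 0.75 words-per-token ratio is an assumption typical of English text;
# actual ratios vary by tokenizer and content.
tokens_per_second = 149
words_per_token = 0.75
words_per_second = tokens_per_second * words_per_token  # roughly 110
```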
When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.<\/p>\n<figure id=\"attachment_74330\" aria-describedby=\"caption-attachment-74330\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/interference-performance.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-74330\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/interference-performance.png\" alt=\"\" width=\"1430\" height=\"872\"><\/a><figcaption id=\"caption-attachment-74330\" class=\"wp-caption-text\">NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.<\/figcaption><\/figure>\n<h2><b>Get Started With Brave, Leo AI and Ollama<\/b><\/h2>\n<p>Installing Ollama is easy \u2014 download the installer from the project\u2019s <a target=\"_blank\" href=\"https:\/\/ollama.com\/\" rel=\"noopener\">website<\/a> and let it run in the background. From a command prompt, users can download and install a wide variety of <a target=\"_blank\" href=\"https:\/\/ollama.com\/library\" rel=\"noopener\">supported models<\/a>, then interact with the local model from the command line.<\/p>\n<p>For simple instructions on how to add local LLM support via Ollama, read the <a target=\"_blank\" href=\"https:\/\/brave.com\/blog\/byom-nightly\/\" rel=\"noopener\">company\u2019s blog<\/a>. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. 
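The command-line steps above can be sketched as follows. The helper builds the two commands rather than executing them, so the model name ("llama3" here, an example; see ollama.com/library for the full list) and prompt are easy to swap; pass each list to subprocess.run() to execute.

```python
# Sketch of the command-line workflow described above. "llama3" is an
# example model name. "ollama pull" downloads a model; "ollama run" with a
# trailing argument sends it a one-shot prompt.
def ollama_commands(model: str, prompt: str) -> list:
    return [
        ["ollama", "pull", model],         # download and install the model
        ["ollama", "run", model, prompt],  # prompt the model from the CLI
    ]

cmds = ollama_commands("llama3", "Summarize the benefits of local inference.")
```

Running `ollama run llama3` with no prompt argument instead opens an interactive chat session in the terminal.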
Users can also switch between cloud and local models at any time.<\/p>\n<figure id=\"attachment_74333\" aria-describedby=\"caption-attachment-74333\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/multi-lora-support.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-74333\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/10\/multi-lora-support.png\" alt=\"\" width=\"1407\" height=\"900\"><\/a><figcaption id=\"caption-attachment-74333\" class=\"wp-caption-text\">Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs!<\/figcaption><\/figure>\n<p>Developers can learn more about how to use Ollama and llama.cpp in the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/accelerating-llms-with-llama-cpp-on-nvidia-rtx-systems\/\" rel=\"noopener\">NVIDIA Technical Blog<\/a>.<\/p>\n<p><i>Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. 
Make sense of what\u2019s new and what\u2019s next by subscribing to the <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/?modal=subscribe-ai\" rel=\"noopener\"><i>AI Decoded newsletter<\/i><\/a><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/rtx-ai-brave-browser\/<\/p>\n","protected":false},"author":0,"featured_media":3744,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3743"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3743"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3743\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3744"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}