{"id":3843,"date":"2024-12-24T16:48:01","date_gmt":"2024-12-24T16:48:01","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/24\/from-generative-to-agentic-ai-wrapping-the-years-ai-advancements\/"},"modified":"2024-12-24T16:48:01","modified_gmt":"2024-12-24T16:48:01","slug":"from-generative-to-agentic-ai-wrapping-the-years-ai-advancements","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2024\/12\/24\/from-generative-to-agentic-ai-wrapping-the-years-ai-advancements\/","title":{"rendered":"From Generative to Agentic AI, Wrapping the Year\u2019s AI Advancements"},"content":{"rendered":"<div>\n<p><i>Editor\u2019s note: This post is part of the <\/i><a href=\"https:\/\/blogs.nvidia.com\/blog\/tag\/ai-decoded\/\"><i>AI Decoded series<\/i><\/a><i>, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.<\/i><\/p>\n<p>The <a href=\"https:\/\/blogs.nvidia.com\/blog\/tag\/ai-decoded\/\">AI Decoded<\/a> series over the past year has broken down all things AI \u2014 from simplifying the complexities of large language models (LLMs) to highlighting the power of RTX AI PCs and workstations.<\/p>\n<p>Recapping the latest AI advancements, this roundup highlights how the technology has changed the way people write, game, learn and connect with each other online.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/rtx\/\" rel=\"noopener\">NVIDIA GeForce RTX GPUs<\/a> offer the power to deliver these experiences on PC laptops, desktops and workstations. 
They feature specialized AI <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/tensor-cores\/\" rel=\"noopener\">Tensor Cores<\/a> that can deliver more than <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-tops\/\">1,300 trillion operations per second<\/a> (TOPS) of processing power for cutting-edge performance in gaming, creating, everyday productivity and more. For workstations, <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/design-visualization\/desktop-graphics\/\" rel=\"noopener\">NVIDIA RTX GPUs<\/a> deliver over 1,400 TOPS, enabling next-level AI acceleration and efficiency.<\/p>\n<h2><b>Unlocking Productivity and Creativity With AI-Powered Chatbots<\/b><\/h2>\n<p>AI Decoded earlier this year <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-rtx-pc-llms-chatbots\/\">explored what LLMs are<\/a>, why they matter and how to use them.<\/p>\n<p>For many, tools like ChatGPT were their first introduction to AI. LLM-powered chatbots have transformed computing from basic, rule-based interactions to dynamic conversations. They can suggest vacation ideas, write customer service emails, spin up original poetry and even write code for users.<\/p>\n<p>Introduced in March, <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-gtc-chatrtx-workbench-nim\/\">ChatRTX<\/a> is a demo app that lets users personalize a GPT LLM with their own content, such as documents, notes and images.<\/p>\n<p>With features like retrieval-augmented generation (<a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-retrieval-augmented-generation\/\">RAG<\/a>), NVIDIA TensorRT-LLM and RTX acceleration, ChatRTX enables users to quickly search and ask questions about their own data. 
And since the app runs locally on RTX PCs or workstations, results are both fast and private.<\/p>\n<p>NVIDIA offers the broadest selection of <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-foundation-models\/\">foundation models<\/a> for enthusiasts and developers, including Gemma 2, Mistral and Llama-3. These models can run locally on NVIDIA GeForce and RTX GPUs for fast, secure performance without needing to rely on cloud services.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/chatrtx\/\" rel=\"noopener\">Download ChatRTX<\/a> today.<\/p>\n<h2><b>Introducing RTX-Accelerated Partner Applications<\/b><\/h2>\n<p>AI is being incorporated into more and more apps and use cases, including games, content creation apps, software development and productivity tools.<\/p>\n<p>This expansion is fueled by a wide selection of RTX-accelerated developer and community tools, software development kits, models and frameworks that have made it easier than ever to run models locally in popular applications.<\/p>\n<p>AI Decoded in October <a href=\"https:\/\/blogs.nvidia.com\/blog\/rtx-ai-brave-browser\/\">spotlighted<\/a> how Brave Browser\u2019s <a target=\"_blank\" href=\"https:\/\/brave.com\/\" rel=\"noopener\">Leo AI<\/a>, powered by NVIDIA RTX GPUs and the open-source Ollama platform, enables users to run local LLMs like Llama 3 directly on their RTX PCs or workstations.<\/p>\n<p>This local setup offers fast, responsive AI performance while keeping user data private \u2014 without relying on the cloud. NVIDIA\u2019s optimizations for tools like Ollama offer accelerated performance for tasks like summarizing articles, answering questions and extracting insights, all directly within the <a target=\"_blank\" href=\"https:\/\/brave.com\/\" rel=\"noopener\">Brave<\/a> browser. 
Users can switch between local and cloud models, providing flexibility and control over their AI experience.<\/p>\n<p>For simple instructions on how to add local LLM support via <a target=\"_blank\" href=\"https:\/\/ollama.com\/\" rel=\"noopener\">Ollama<\/a>, read <a target=\"_blank\" href=\"https:\/\/brave.com\/blog\/byom-nightly\/\" rel=\"noopener\">Brave\u2019s blog<\/a>. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries.<\/p>\n<h2><b>Agentic AI \u2014 Enabling Complex Problem-Solving<\/b><\/h2>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/what-is-agentic-ai\/\">Agentic AI<\/a> is the next frontier of AI, capable of using sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.<\/p>\n<p>AI Decoded <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-decoded-agents-anythingllm-rtx-ai\">explored<\/a> how the AI community is experimenting with the technology to create smarter, more capable AI systems.<\/p>\n<p>Partner applications like <a target=\"_blank\" href=\"https:\/\/anythingllm.com\/\" rel=\"noopener\">AnythingLLM<\/a> showcase how AI is going beyond simple question-answering to improving productivity and creativity. 
Users can harness the application to deploy built-in agents that can tackle tasks like searching the web or scheduling meetings.<\/p>\n<figure id=\"attachment_76667\" aria-describedby=\"caption-attachment-76667\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/12\/AI-agent-in-AnythingLLM.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-76667\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2024\/12\/AI-agent-in-AnythingLLM-1680x1051.png\" alt=\"\" width=\"1680\" height=\"1051\"><\/a><figcaption id=\"caption-attachment-76667\" class=\"wp-caption-text\">Example of a user invoking an AI agent in AnythingLLM to complete a web search query.<\/figcaption><\/figure>\n<p>AnythingLLM lets users interact with documents through intuitive interfaces, automate complex tasks with AI agents and run advanced LLMs locally. Harnessing the power of RTX GPUs, it delivers faster, smarter and more responsive AI workflows \u2014 all within a single local desktop application. The application also works offline and is fast and private, capable of using local data and tools typically inaccessible with cloud-based solutions.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/hub.anythingllm.com\" rel=\"noopener\">AnythingLLM\u2019s Community Hub<\/a> lets anyone easily access system prompts that can help them steer LLM behavior, discover productivity-boosting slash commands and build specialized AI agent skills for unique workflows and custom tools.<\/p>\n<p>By enabling users to run agentic AI workflows on their own systems with full privacy, AnythingLLM is fueling innovation and making it easier to experiment with the latest technologies.<\/p>\n<h2><b>AI Decoded Wrapped<\/b><\/h2>\n<p>Over 600 Windows apps and games today are already running AI locally on more than 100 million GeForce RTX AI PCs and workstations worldwide, delivering fast, reliable and low-latency performance. 
Learn more about <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-studio-rtx-pc-workstation-advanced\/\">NVIDIA GeForce RTX AI PCs<\/a> and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/workstations\/\" rel=\"noopener\">NVIDIA RTX AI workstations<\/a>.<\/p>\n<p>Tune into the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/events\/ces\/\" rel=\"noopener\">CES<\/a> keynote delivered by NVIDIA founder and CEO Jensen Huang on Jan. 6 to discover how the latest in AI is supercharging gaming, content creation and development.<\/p>\n<p><i>Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what\u2019s new and what\u2019s next by subscribing to the <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/?modal=subscribe-ai\" rel=\"noopener\"><i>AI Decoded newsletter<\/i><\/a><i>.<\/i><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/ai-decoded-recap-ai-pc-rtx-ai\/<\/p>\n","protected":false},"author":0,"featured_media":3844,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3843"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3843"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3843\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3844"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3843"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}