{"id":4505,"date":"2026-03-17T15:55:38","date_gmt":"2026-03-17T15:55:38","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2026\/03\/17\/gtc-spotlights-nvidia-rtx-pcs-and-dgx-sparks-running-latest-open-models-and-ai-agents-locally\/"},"modified":"2026-03-17T15:55:38","modified_gmt":"2026-03-17T15:55:38","slug":"gtc-spotlights-nvidia-rtx-pcs-and-dgx-sparks-running-latest-open-models-and-ai-agents-locally","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2026\/03\/17\/gtc-spotlights-nvidia-rtx-pcs-and-dgx-sparks-running-latest-open-models-and-ai-agents-locally\/","title":{"rendered":"GTC Spotlights NVIDIA RTX PCs and DGX Sparks Running Latest Open Models and AI Agents Locally"},"content":{"rendered":"<div>\n<p><span>The paradigm of consumer computing has revolved around the concept of a personal device \u2014 from PCs to smartphones and tablets. Now, generative AI \u2014 particularly OpenClaw \u2014 has introduced a new category: agent computers. 
These devices, like the NVIDIA DGX Spark desktop AI supercomputer or dedicated NVIDIA RTX PCs, are ideal for running personal agents \u2014 privately and for free.\u00a0<\/span><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/gtc\/\" rel=\"noopener\"><span>NVIDIA GTC<\/span><\/a><span>, running this week, is showcasing a host of agentic AI announcements, including:<\/span><\/p>\n<ul>\n<li><span>New open models for local agents, including NVIDIA Nemotron 3 Nano 4B and Nemotron 3 Super 120B, and optimizations for Qwen 3.5 and Mistral Small 4.<\/span><\/li>\n<li><span>NVIDIA NemoClaw, an open source stack that optimizes the OpenClaw experience on NVIDIA devices by increasing security and supporting local models.\u00a0<\/span><\/li>\n<li><span>Easier fine\u2011tuning with Unsloth Studio<\/span> <span>to further improve open model accuracy for agentic workflows.<\/span><\/li>\n<\/ul>\n<p><span>In-person GTC attendees can swing by the <\/span><a href=\"https:\/\/blogs.nvidia.com\/blog\/gtc-2026-news\/#build-a-claw\"><span>NVIDIA build-a-claw event<\/span><\/a><span> in the GTC Park, running daily through March 19, from 8 a.m. to 5 p.m. NVIDIA experts will help guests customize and deploy a proactive, always-on AI assistant using their device of choice. Whether technical or just curious, participants will name their agent, define its personality and grant it access to the tools it needs \u2014 creating a personal assistant reachable from their preferred messaging app.<\/span><\/p>\n<h2><b>New Open Models Bring Cloud-Level Quality to Local Agents\u00a0<\/b><\/h2>\n<p><span>The next generation of local models \u2014 with increasingly large context windows \u2014 delivers the intelligence to run agents on PC. 
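The arithmetic behind hosting such models on a local device is straightforward: a model's memory footprint is roughly its parameter count times the bits stored per weight. A minimal sketch (the 1.2x overhead factor for KV cache and activations is an assumption; real footprints vary by runtime and context length):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Estimate the memory needed to host a quantized model.

    Weights-only footprint (params * bits / 8) times a rough overhead
    factor for KV cache and activations (illustrative assumption).
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 120B-parameter model quantized to 4 bits per weight needs roughly
# 72 GB, which fits in a 128 GB unified-memory system with room to spare.
print(round(model_memory_gb(120, 4), 1))  # → 72.0
```

The same estimate explains why a 4B-parameter model at 4 bits (about 2-3 GB) is comfortable on mainstream GeForce RTX GPUs.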
Combined with richer user context and powerful local tools, these advances are unlocking new possibilities on AI PCs, especially on DGX Spark, with its 128GB of unified memory that supports models with more than 120 billion parameters.<\/span><\/p>\n<p><a href=\"https:\/\/blogs.nvidia.com\/blog\/nemotron-3-super-agentic-ai\/\"><b>Nemotron 3 Super<\/b><\/a><span>, released last week, is a 120\u2011billion\u2011parameter open model with 12 billion active parameters, designed to run complex agentic AI systems. Nemotron 3 Super is optimal for powering agents on the DGX Spark or NVIDIA RTX PRO workstations. On <\/span><a target=\"_blank\" href=\"https:\/\/pinchbench.com\/?score=best\" rel=\"noopener\"><span>PinchBench<\/span><\/a><span> \u2014 a new benchmark for determining how well large language models perform with OpenClaw \u2014 Nemotron 3 Super scored 85.6%, making it the top open model in its class.<\/span><\/p>\n<p><b>Mistral Small 4<\/b><span>, a 119-billion-parameter open model with 6 billion active parameters \u2014 8 billion including all layers \u2014 unifies the capabilities of Mistral\u2019s flagship models. Users now have an ultraefficient model optimized for general chat, coding and agentic tasks.<\/span><\/p>\n<p><span>Both of these models run locally on DGX Spark and RTX PRO GPUs.<\/span><\/p>\n<p><span>For GeForce RTX users looking for smaller models, <\/span><b>Nemotron 3 Nano 4B<\/b><span> is the latest model to join the <\/span><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-debuts-nemotron-3-family-of-open-models\" rel=\"noopener\"><span>NVIDIA Nemotron 3 family of open models<\/span><\/a><span>, providing a compact, capable starting point for building agents and assistants locally on RTX AI PCs. The model is a strong fit for building action-taking conversational personas in games and apps that run on resource-constrained hardware. 
It\u2019s available across any NVIDIA GPU-enabled system and combines state-of-the-art instruction-following and exceptional tool use with minimal VRAM footprint.\u00a0<\/span><\/p>\n<p><span>In addition, NVIDIA announced optimizations for <\/span><b>Alibaba\u2019s Qwen 3.5 models<\/b><span>,<\/span> <span>which have demonstrated outstanding accuracy (<\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/Qwen\/Qwen3.5-27B\" rel=\"noopener\"><span>27B<\/span><\/a><span>, <\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/Qwen\/Qwen3.5-9B\" rel=\"noopener\"><span>9B<\/span><\/a><span> and <\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/Qwen\/Qwen3.5-4B\" rel=\"noopener\"><span>4B<\/span><\/a><span>) and are suited for running local agents on NVIDIA GPUs. The new models natively support vision, multi-token prediction and a large 262,000-token context window. The dense 27-billion-parameter model excels when paired with an RTX 5090 GPU.<\/span><\/p>\n<figure id=\"attachment_91182\" aria-describedby=\"caption-attachment-91182\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2026\/03\/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x.png\"><img decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-91182\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2026\/03\/rtx-ai-pc-raig-blog-perf-chart-desktop-light@2x-1680x819.png\" alt=\"\" width=\"1200\" height=\"585\"><\/a><figcaption id=\"caption-attachment-91182\" class=\"wp-caption-text\"><em>All configurations measured using Q4_K_M quantizations BS = 1, ISL = 1024 and OSL = 128 on NVIDIA RTX 5090 and Mac M3 Ultra desktops. Token generation throughput measured on llama.cpp b7789, using the llama-bench tool.<\/em><\/figcaption><\/figure>\n<p><span>Users can try these models today via Ollama, LM Studio and llama.cpp, with accelerated inference powered by RTX GPUs and DGX Spark. 
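Local servers such as Ollama and LM Studio expose an OpenAI-compatible HTTP endpoint for the models they serve, so an agent or script can talk to a local model with a few lines of code. A minimal sketch (the model name is a placeholder, not an official identifier; the port shown is Ollama's default, and LM Studio uses its own):

```python
import json
from urllib import request

# Build an OpenAI-style chat request for a locally served model.
def chat_payload(model: str, prompt: str) -> dict:
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

payload = chat_payload("nemotron-3-nano", "Summarize today's meeting notes.")

# POST it to the local server (Ollama's default OpenAI-compatible endpoint):
req = request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment with a server running locally
```

Because the request never leaves the machine, prompts and files stay private and no per-token fees apply.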
Learn more about the latest on <\/span><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-expands-open-model-families-to-power-the-next-wave-of-agentic-physical-and-healthcare-ai\" rel=\"noopener\"><span>NVIDIA open models<\/span><\/a><span>.\u00a0<\/span><\/p>\n<h2><b>Faster Creative AI With the Latest RTX-Optimized Models<\/b><\/h2>\n<p><span>LTX 2.3, Lightricks\u2019 state-of-the-art audio-video model, released earlier this month, now has support for <\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2.3-nvfp4\" rel=\"noopener\"><span>NVFP4<\/span><\/a><span> and <\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2.3-fp8\" rel=\"noopener\"><span>FP8<\/span><\/a><span> distilled models, accelerating performance by 2.1x. Learn more about <\/span><a target=\"_blank\" href=\"https:\/\/ltx.io\/model\/model-blog\/ltx-2-3-release\" rel=\"noopener\"><span>Lightricks\u2019 LTX 2.3 model<\/span><\/a><span>.<\/span><\/p>\n<p><span>In addition, Black Forest Labs\u2019 FLUX.2 Klein 9B received an update last week, accelerating image editing by up to 2x. NVIDIA has collaborated with Black Forest Labs to release an <\/span><a target=\"_blank\" href=\"https:\/\/huggingface.co\/black-forest-labs\/FLUX.2-klein-9b-kv\" rel=\"noopener\"><span>FP8 version<\/span><\/a><span>, optimized for the fastest performance and optimal memory consumption on RTX GPUs.\u00a0<\/span><\/p>\n<h2><b>NemoClaw \u2014 NVIDIA Optimizations for OpenClaw<\/b><\/h2>\n<p><span>AI developers and enthusiasts are buying DGX Spark supercomputers or building dedicated RTX PCs to run autonomous AI agents, such as OpenClaw, that draw context from personal files, apps and workflows and can automate daily tasks. 
However, as adoption of agentic systems like OpenClaw grows, so do concerns about token costs, security and privacy.<\/span><\/p>\n<p><span>To help address these concerns, NVIDIA this week introduced <\/span><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai\/nemoclaw\/\" rel=\"noopener\"><span>NemoClaw<\/span><\/a><span>, an open source stack that optimizes OpenClaw on NVIDIA devices. The first features available in NemoClaw are NVIDIA Nemotron open models and the NVIDIA OpenShell runtime. Nemotron local models enable users to run inference locally, which means better privacy and no token costs. OpenShell is the runtime designed for executing claws more safely.<\/span><\/p>\n<p><span>Learn more about<\/span> <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-announces-nemoclaw\" rel=\"noopener\"><span>NemoClaw<\/span><\/a><span>. Watch the<\/span> <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/gtc\/keynote\/\" rel=\"noopener\"><span>GTC keynote<\/span><\/a><span> from NVIDIA founder and CEO Jensen Huang and explore <\/span><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/gtc\/session-catalog\/\" rel=\"noopener\"><span>sessions<\/span><\/a><span>.<\/span><\/p>\n<h2><b>Fine-Tuning Made Easy With Unsloth Studio<\/b><\/h2>\n<p><span>As open models make giant leaps, one way of further improving accuracy is fine-tuning, which allows users to customize a model for their own data and use cases. This technique normally requires in-depth technical expertise, coding knowledge and extensive configuration. Unsloth, a leading open source library for model fine-tuning and alignment, today launched Unsloth Studio, an easy-to-use, web-based user interface that simplifies the fine-tuning process for AI enthusiasts and developers.<\/span><\/p>\n<p><span>Unsloth Studio offers support for more than 500 AI models. 
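Lightweight fine-tuning of this kind typically relies on low-rank adaptation (LoRA), which is what makes it practical on a single GPU: only two small matrices per layer are trained instead of the full weight matrix. A minimal sketch of the core idea (dimensions, rank and scaling below are illustrative):

```python
import numpy as np

# LoRA: instead of updating a full d x d weight matrix, train two small
# matrices A (r x d) and B (d x r) with rank r << d, and apply
#   W_eff = W + (alpha / r) * B @ A
d, r, alpha = 1024, 8, 16
W = np.random.randn(d, d)   # frozen base weights
A = np.random.randn(r, d)   # trainable down-projection
B = np.zeros((d, r))        # zero-init, so W_eff == W before training
W_eff = W + (alpha / r) * (B @ A)

full_params = d * d
lora_params = A.size + B.size
print(f"trainable: {lora_params:,} of {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Quantized low-rank adaptation (QLoRA) applies the same low-rank update on top of a 4-bit-quantized base model, which is where the largest VRAM savings come from.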
The simple user interface makes the training and fine-tuning process easy: Users can just drop in their dataset, tap the graph-based canvas to generate additional high-quality synthetic data and start the fine-tuning job. It supports quantized low-rank adaptation, low-rank adaptation and full fine-tuning. As the model is being fine-tuned, users can monitor and visualize job progress. Finally, they can export the model into a framework of choice and chat away, all within the same web app.\u00a0<\/span><\/p>\n<p><span>Unsloth Studio\u2019s new interface is built on the Unsloth library, which delivers up to 2x faster training with up to 70% VRAM savings, using custom and specialized GPU kernels. This means that new users can get the most out of their NVIDIA RTX GPUs and DGX Spark, right out of the box.\u00a0<\/span><\/p>\n<p><span>Try <\/span><a target=\"_blank\" href=\"https:\/\/github.com\/unslothai\/unsloth-studio\/tree\/main\/unsloth_studio\" rel=\"noopener\"><span>Unsloth Studio today<\/span><\/a><span>, including with new models like Nemotron 3 Nano 4B and Qwen 3.5. Check out other <\/span><a href=\"https:\/\/blogs.nvidia.com\/blog\/rtx-ai-garage-fine-tuning-unsloth-dgx-spark\/\"><span>RTX AI Garage<\/span><\/a><span> posts for more information on fine-tuning models with NVIDIA GeForce RTX GPUs.<\/span><\/p>\n<h2><b>#ICYMI From GTC 2026<\/b><\/h2>\n<p><span>\u2728<\/span><b>RTX AI<\/b> <b>video generation guide featuring RTX Video in ComfyUI: <\/b><span>Launched at CES earlier this year, the new <\/span><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/rtx-ai-video-generation-guide\/\" rel=\"noopener\"><span>RTX AI video generation guide<\/span><\/a><span> shows creators and enthusiasts how to go from concept to creation using guided text-to-image workflows to produce keyframes for AI-generated videos, then upscale to 4K with RTX Video technology running on local GPUs. 
Get started with the guide and share creations on social media with #AIonRTX.<\/span><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/maxine?sortBy=developer_learning_library%2Fsort%2Ftitle%3Aasc\" rel=\"noopener\"><b>NVIDIA AI for Media<\/b><\/a><span> is a set of high\u2011performance, easy\u2011to\u2011use software development kits that bring NVIDIA Broadcast-class AI effects \u2014 enhanced audio (<\/span><a target=\"_blank\" href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/maxine\/collections\/maxine_linux_audio_effects_sdk_collection\" rel=\"noopener\"><span>Linux<\/span><\/a><span> or <\/span><a target=\"_blank\" href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/maxine\/collections\/maxine_windows_audio_effects_sdk_collection\" rel=\"noopener\"><span>Windows<\/span><\/a><span>), <\/span><a target=\"_blank\" href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/maxine\/collections\/maxine_vfx_sdk\" rel=\"noopener\"><span>video<\/span><\/a><span> and <\/span><a target=\"_blank\" href=\"https:\/\/catalog.ngc.nvidia.com\/orgs\/nvidia\/teams\/maxine\/collections\/maxine_ar_sdk\" rel=\"noopener\"><span>augmented-reality<\/span><\/a><span> features \u2014 to live media, video conferencing and post\u2011production workflows. The latest update \u2014 available today \u2014 adds more accurate lip-syncing, multi\u2011active-speaker detection, faster 4K upscaling on RTX PRO and GeForce RTX 40 and 50 Series GPUs via the RTX Video Super Resolution feature, better background noise reduction and lower latency for the NVIDIA Studio Voice feature.<\/span><\/p>\n<p><span>
<\/span><a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games\" rel=\"noopener\"><b>NVIDIA DLSS 5<\/b><\/a><span>, arriving this fall, delivers an AI-powered breakthrough in visual fidelity for games by infusing pixels with photoreal lighting and materials to bridge the gap between rendering and reality.<\/span><\/p>\n<p><b>Maxon released Redshift 2026.4<\/b><span>, introducing a new real-time visualization workflow powered by DLSS to allow architects to walk through projects at interactive speed and quality. \u201cNVIDIA\u2019s DLSS technology is a critical component, allowing us to deliver high-quality visuals at interactive speeds,\u201d said Philip Losch, chief technology and AI officer at Maxon.<\/span><\/p>\n<p><b>Reincubate Camo has added Windows ML on NVIDIA TensorRT RTX EP <\/b><span>for AI Autotune in its Camo Streamlight app, significantly improving performance on RTX GPUs.<\/span><\/p>\n<p><i><span>Plug in to NVIDIA AI PC on <\/span><\/i><a target=\"_blank\" href=\"https:\/\/www.facebook.com\/NVIDIA.AI.PC\/\" rel=\"noopener\"><i><span>Facebook<\/span><\/i><\/a><i><span>, <\/span><\/i><a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidia.ai.pc\/\" rel=\"noopener\"><i><span>Instagram<\/span><\/i><\/a><i><span>, <\/span><\/i><a target=\"_blank\" href=\"https:\/\/www.tiktok.com\/@nvidia_ai_pc\" rel=\"noopener\"><i><span>TikTok<\/span><\/i><\/a><i><span> and <\/span><\/i><a target=\"_blank\" href=\"https:\/\/x.com\/NVIDIA_AI_PC\" rel=\"noopener\"><i><span>X<\/span><\/i><\/a><i><span> \u2014 and stay informed by subscribing to the <\/span><\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/?modal=subscribe-ai\" rel=\"noopener\"><i><span>RTX AI PC newsletter<\/span><\/i><\/a><i><span>.<\/span><\/i><\/p>\n<p><i><span>Follow NVIDIA Workstation on <\/span><\/i><a target=\"_blank\" 
href=\"https:\/\/www.linkedin.com\/showcase\/3761136\/\" rel=\"noopener\"><i><span>LinkedIn<\/span><\/i><\/a><i><span> and <\/span><\/i><a target=\"_blank\" href=\"https:\/\/x.com\/NVIDIAworkstatn\" rel=\"noopener\"><i><span>X<\/span><\/i><\/a><i><span>.\u00a0<\/span><\/i><\/p>\n<p><i><span>See <\/span><\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-eu\/about-nvidia\/terms-of-service\/\" rel=\"noopener\"><i><span>notice<\/span><\/i><\/a><i><span> regarding software product information.<\/span><\/i><\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/rtx-ai-garage-gtc-2026-nemoclaw\/<\/p>\n","protected":false},"author":0,"featured_media":4506,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4505"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4505"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4505\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4506"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4505"}],"curies":[{"name":"wp","href":"https:\/\/api.
w.org\/{rel}","templated":true}]}}