{"id":4393,"date":"2025-12-15T15:41:36","date_gmt":"2025-12-15T15:41:36","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/12\/15\/how-to-fine-tune-an-llm-on-nvidia-gpus-with-unsloth\/"},"modified":"2025-12-15T15:41:36","modified_gmt":"2025-12-15T15:41:36","slug":"how-to-fine-tune-an-llm-on-nvidia-gpus-with-unsloth","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/12\/15\/how-to-fine-tune-an-llm-on-nvidia-gpus-with-unsloth\/","title":{"rendered":"How to Fine-Tune an LLM on NVIDIA GPUs With Unsloth"},"content":{"rendered":"<div>\n<p>Modern workflows showcase the endless possibilities of generative and agentic AI on PCs.<\/p>\n<p>Examples include tuning a chatbot to handle product-support questions or building a personal assistant to manage one\u2019s schedule. A challenge remains, however: getting a small language model to respond consistently and accurately on specialized agentic tasks.<\/p>\n<p>That\u2019s where fine-tuning comes in.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/unsloth.ai\/\" rel=\"noopener\">Unsloth<\/a>, one of the world\u2019s most widely used open-source frameworks for fine-tuning LLMs, provides an approachable way to customize models. 
It\u2019s optimized for efficient, low-memory training on NVIDIA GPUs \u2014 from <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/laptops\/50-series\/\" rel=\"noopener\">GeForce RTX desktops and laptops<\/a> to <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/products\/workstations\/\" rel=\"noopener\">RTX PRO workstations<\/a> and <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/products\/workstations\/dgx-spark\/\" rel=\"noopener\">DGX Spark<\/a>, the world\u2019s smallest AI supercomputer.<\/p>\n<p>Another powerful starting point for fine-tuning is the just-announced <a target=\"_blank\" href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-debuts-nemotron-3-family-of-open-models\" rel=\"noopener\">NVIDIA Nemotron 3<\/a> family of open models, data and libraries. Nemotron 3 is the most efficient open-model family yet, ideal for agentic AI fine-tuning.<\/p>\n<h2><b>Teaching AI New Tricks<\/b><\/h2>\n<p>Fine-tuning is like giving an AI model a focused training session. With examples tied to a specific topic or workflow, the model improves its accuracy by learning new patterns and adapting to the task at hand.<\/p>\n<p>Choosing a fine-tuning method depends on how much of the original model the developer wants to adjust. Based on their goals, developers can use one of three main methods:<\/p>\n<p><b>Parameter-efficient fine-tuning (such as LoRA or QLoRA)<\/b>:<\/p>\n<ul>\n<li>How it works: Updates only a small portion of the model for faster, lower-cost training. 
It\u2019s a smarter, more efficient way to enhance a model without drastically altering it.<\/li>\n<li>Target use case: Useful across nearly all scenarios where full fine-tuning would traditionally be applied \u2014 including adding domain knowledge, improving coding accuracy, adapting the model for legal or scientific tasks, refining reasoning, or aligning tone and behavior.<\/li>\n<li>Requirements: Small- to medium-sized dataset (100-1,000 prompt-sample pairs).<\/li>\n<\/ul>\n<p><b>Full fine-tuning<\/b>:<\/p>\n<ul>\n<li>How it works: Updates all of the model\u2019s parameters \u2014 useful for teaching the model to follow specific formats or styles.<\/li>\n<li>Target use case: Advanced use cases, such as building AI agents and chatbots that must provide assistance about a specific topic, stay within a certain set of guardrails and respond in a particular manner.<\/li>\n<li>Requirements: Large dataset (1,000+ prompt-sample pairs).<\/li>\n<\/ul>\n<p><b>Reinforcement learning<\/b>:<\/p>\n<ul>\n<li>How it works: Adjusts the behavior of the model using feedback or preference signals. The model learns by interacting with its environment and uses the feedback to improve itself over time. This is a complex, advanced technique that interweaves training and inference \u2014 and can be used in tandem with parameter-efficient fine-tuning and full fine-tuning techniques. See <a target=\"_blank\" href=\"https:\/\/docs.unsloth.ai\/get-started\/reinforcement-learning-rl-guide\" rel=\"noopener\">Unsloth\u2019s Reinforcement Learning Guide<\/a> for details.<\/li>\n<li>Target use case: Improving the accuracy of a model in a particular domain \u2014 such as law or medicine \u2014 or building autonomous agents that can orchestrate actions on a user\u2019s behalf.<\/li>\n<li>Requirements: A process that contains an action model, a reward model and an environment for the model to learn from.<\/li>\n<\/ul>\n<p>Another factor to consider is the VRAM required by each method. 
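A rough way to see why the parameter-efficient route needs so much less VRAM: for a frozen weight matrix of shape (d_out, d_in), a rank-r LoRA adapter trains only r * (d_in + d_out) extra weights. The sketch below is framework-agnostic, and the layer shape and rank are illustrative, not tied to any particular model:

```python
# Back-of-envelope count of the weights LoRA actually trains.
# A rank-r adapter on a frozen (d_out x d_in) matrix adds two small
# matrices: A (r x d_in) and B (d_out x r).

def lora_params(d_out: int, d_in: int, rank: int) -> int:
    """Trainable parameters a LoRA adapter adds to one weight matrix."""
    return rank * (d_in + d_out)

# Illustrative 4096 x 4096 attention projection with rank 16.
full = 4096 * 4096                  # weights updated by full fine-tuning
lora = lora_params(4096, 4096, 16)  # weights updated by LoRA

print(full)                  # -> 16777216
print(lora)                  # -> 131072
print(f"{lora / full:.2%}")  # -> 0.78%
```

Summed over every targeted matrix in the model, this is why LoRA runs fit in a small fraction of the memory that full fine-tuning demands.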
The chart below provides an overview of the requirements to run each type of fine-tuning method on Unsloth.<\/p>\n<figure id=\"attachment_88327\" aria-describedby=\"caption-attachment-88327\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/12\/Fine-tuning-requirements-on-Unsloth.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-88327\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/12\/Fine-tuning-requirements-on-Unsloth.jpg\" alt=\"\" width=\"923\" height=\"712\"><\/a><figcaption id=\"caption-attachment-88327\" class=\"wp-caption-text\">Fine-tuning requirements on Unsloth.<\/figcaption><\/figure>\n<h2><b>Unsloth: A Fast Path to Fine-Tuning on NVIDIA GPUs<\/b><\/h2>\n<p>LLM fine-tuning is a memory- and compute-intensive workload that involves billions of matrix multiplications to update model weights at every training step. This type of heavy parallel workload requires the power of NVIDIA GPUs to complete the process quickly and efficiently.<\/p>\n<p>Unsloth shines at this workload, translating complex mathematical operations into efficient, custom GPU kernels to accelerate AI training.<\/p>\n<p>Unsloth boosts the performance of the Hugging Face transformers library by up to 2.5x on NVIDIA GPUs. 
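In practice, a LoRA run takes only a few lines of code. The sketch below follows the pattern from Unsloth's own guides and notebooks; it assumes `pip install unsloth`, an NVIDIA GPU, and a JSONL file with a `text` column. The model ID, file name and hyperparameters are illustrative starting points, and exact trainer arguments vary across library versions:

```python
# Illustrative LoRA fine-tune with Unsloth; requires an NVIDIA GPU.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized base model to keep VRAM usage low (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # example model ID
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: only these small matrices get trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset file; each line holds a {"text": "..."} record.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The resulting adapter can be merged into the base weights or kept separate; see Unsloth's guides for export options such as GGUF.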
These GPU-specific optimizations, combined with Unsloth\u2019s ease of use, make fine-tuning accessible to a broader community of AI enthusiasts and developers.<\/p>\n<p>The framework is built and optimized for NVIDIA hardware \u2014 from GeForce RTX laptops to RTX PRO workstations and DGX Spark \u2014 providing peak performance while reducing VRAM consumption.<\/p>\n<p>Unsloth provides helpful guides on how to get started and manage different LLM configurations, hyperparameters and options, along with example notebooks and step-by-step workflows.<\/p>\n<p>Check out some of these Unsloth guides:<\/p>\n<p>Learn how to <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/spark\/unsloth\" rel=\"noopener\">install Unsloth on NVIDIA DGX Spark<\/a>. Read the <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/train-an-llm-on-an-nvidia-blackwell-desktop-with-unsloth-and-scale-it\/\" rel=\"noopener\">NVIDIA technical blog<\/a> for a deep dive into fine-tuning and reinforcement learning on the NVIDIA Blackwell platform.<\/p>\n<p>For a hands-on local fine-tuning walkthrough, watch <a target=\"_blank\" href=\"https:\/\/www.youtube.com\/@matthew_berman\" rel=\"noopener\">Matthew Berman<\/a> showing reinforcement learning running on an NVIDIA GeForce RTX 5090 using Unsloth in the video below.<\/p>\n<h2><b>Available Now: NVIDIA Nemotron 3 Family of Open Models<\/b><\/h2>\n<p>The new Nemotron 3 family of open models \u2014 in Nano, Super and Ultra sizes \u2014 is built on a new hybrid latent Mixture-of-Experts (MoE) architecture, delivering leading accuracy with exceptional efficiency for building agentic AI applications.<\/p>\n<p>Nemotron 3 Nano 30B-A3B, available now, is the most compute-efficient model in the lineup. It\u2019s optimized for tasks such as software debugging, content summarization, AI assistant workflows and information retrieval at low inference costs. 
Its hybrid MoE design delivers:<\/p>\n<ul>\n<li>Up to 60% fewer reasoning tokens, significantly reducing inference cost.<\/li>\n<li>A 1 million-token context window, allowing the model to retain far more information for long, multistep tasks.<\/li>\n<\/ul>\n<p>Nemotron 3 Super is a high-accuracy reasoning model for multi-agent applications, while Nemotron 3 Ultra is designed for complex AI applications. Both are expected to be available in the first half of 2026.<\/p>\n<p>NVIDIA has also released an open collection of training datasets and state-of-the-art reinforcement learning libraries. Nemotron 3 Nano fine-tuning is available on Unsloth.<\/p>\n<p>Download Nemotron 3 Nano now from <a target=\"_blank\" href=\"https:\/\/huggingface.co\/nvidia\/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8\" rel=\"noopener\">Hugging Face<\/a>, or experiment with it through Llama.cpp and LM Studio.<\/p>\n<h2><b>DGX Spark: A Compact AI Powerhouse<\/b><\/h2>\n<p>DGX Spark enables local fine-tuning and brings incredible AI performance in a compact desktop supercomputer, giving developers access to far more memory than a typical PC offers.<\/p>\n<p>Built on the NVIDIA Grace Blackwell architecture, DGX Spark delivers up to a petaflop of FP4 AI performance and includes 128GB of unified CPU-GPU memory, giving developers enough headroom to run larger models, longer context windows and more demanding training workloads locally.<\/p>\n<p>For fine-tuning, DGX Spark enables:<\/p>\n<ul>\n<li><b>Larger model sizes.<\/b> Models with more than 30 billion parameters often exceed the VRAM capacity of consumer GPUs but fit comfortably within DGX Spark\u2019s unified memory.<\/li>\n<li><b>More advanced techniques. 
<\/b>Full fine-tuning and reinforcement-learning-based workflows \u2014 which demand more memory and higher throughput \u2014 run significantly faster on DGX Spark.<\/li>\n<li><b>Local control without cloud queues.<\/b> Developers can run compute-heavy tasks locally instead of waiting for cloud instances or managing multiple environments.<\/li>\n<\/ul>\n<p>DGX Spark\u2019s strengths go beyond LLMs. High-resolution diffusion models, for example, often require more memory than a typical desktop can provide. With FP4 support and large unified memory, DGX Spark can generate 1,000 images in just a few seconds and sustain higher throughput for creative or multimodal pipelines.<\/p>\n<p>The table below shows performance for fine-tuning the Llama family of models on DGX Spark.<\/p>\n<figure id=\"attachment_88330\" aria-describedby=\"caption-attachment-88330\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/12\/Performance-for-fine-tuning-Llama-family-of-models-on-DGX-Spark.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-88330\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/12\/Performance-for-fine-tuning-Llama-family-of-models-on-DGX-Spark.jpg\" alt=\"\" width=\"923\" height=\"383\"><\/a><figcaption id=\"caption-attachment-88330\" class=\"wp-caption-text\">Performance for fine-tuning the Llama family of models on DGX Spark.<\/figcaption><\/figure>\n<p>As fine-tuning workflows advance, the new Nemotron 3 family of open models offers scalable reasoning and long-context performance optimized for RTX systems and DGX Spark.<\/p>\n<p>Learn more about how <a target=\"_blank\" href=\"https:\/\/developer.nvidia.com\/blog\/how-nvidia-dgx-sparks-performance-enables-intensive-ai-tasks\/\" rel=\"noopener\">DGX Spark enables intensive AI tasks<\/a>.<\/p>\n<h2><b>#ICYMI \u2014 The Latest Advancements in NVIDIA RTX AI PCs<\/b><\/h2>\n<p>
<b>FLUX.2 Image-Generation Models Now Released, Optimized for NVIDIA RTX GPUs<\/b><\/p>\n<p>The new models from Black Forest Labs are available in FP8 quantizations that reduce VRAM usage and increase performance by 40%.<\/p>\n<p><b>Nexa.ai Expands Local AI on RTX PCs With Hyperlink for Agentic Search<\/b><\/p>\n<p>The new on-device search agent delivers 3x faster retrieval-augmented generation indexing and 2x faster LLM inference, cutting the time to index a dense 1GB folder from about 15 minutes to just four to five minutes. Plus, DeepSeek OCR now runs locally in GGUF via NexaSDK, offering plug-and-play parsing of charts, formulas and multilingual PDFs on RTX GPUs.<\/p>\n<p><b>Mistral AI Unveils New Model Family Optimized for NVIDIA GPUs<\/b><\/p>\n<p>The new Mistral 3 models are optimized from cloud to edge and available for fast, local experimentation through Ollama and Llama.cpp.<\/p>\n<p><b>Blender 5.0 Lands With HDR Color and Major Performance Gains<\/b><\/p>\n<p>The release adds ACES 2.0 wide-gamut\/HDR color, NVIDIA DLSS for up to 5x faster hair and fur rendering, better handling of massive geometry and motion blur for Grease Pencil.<\/p>\n<p><i>Plug in to NVIDIA AI PC on <\/i><a target=\"_blank\" href=\"https:\/\/www.facebook.com\/NVIDIA.AI.PC\/\" rel=\"noopener\"><i>Facebook<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.instagram.com\/nvidia.ai.pc\/\" rel=\"noopener\"><i>Instagram<\/i><\/a><i>, <\/i><a target=\"_blank\" href=\"https:\/\/www.tiktok.com\/@nvidia_ai_pc\" rel=\"noopener\"><i>TikTok<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/x.com\/NVIDIA_AI_PC\" rel=\"noopener\"><i>X<\/i><\/a><i> \u2014 and stay informed by subscribing to the <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-on-rtx\/?modal=subscribe-ai\" rel=\"noopener\"><i>RTX AI PC newsletter<\/i><\/a><i>. 
Follow NVIDIA Workstation on <\/i><a target=\"_blank\" href=\"https:\/\/www.linkedin.com\/showcase\/3761136\/\" rel=\"noopener\"><i>LinkedIn<\/i><\/a><i> and <\/i><a target=\"_blank\" href=\"https:\/\/x.com\/NVIDIAworkstatn\" rel=\"noopener\"><i>X<\/i><\/a><i>.\u00a0<\/i><\/p>\n<p><i>See <\/i><a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-eu\/about-nvidia\/terms-of-service\/\" rel=\"noopener\"><i>notice<\/i><\/a><i> regarding software product information.<\/i><\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/rtx-ai-garage-fine-tuning-unsloth-dgx-spark\/<\/p>\n","protected":false},"author":0,"featured_media":4394,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4393"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4393"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4393\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4394"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4393"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4393"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4393"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}
]}}