{"id":3181,"date":"2023-09-25T15:47:57","date_gmt":"2023-09-25T15:47:57","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2023\/09\/25\/six-steps-toward-ai-security\/"},"modified":"2023-09-25T15:47:57","modified_gmt":"2023-09-25T15:47:57","slug":"six-steps-toward-ai-security","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2023\/09\/25\/six-steps-toward-ai-security\/","title":{"rendered":"Six Steps Toward AI Security"},"content":{"rendered":"<div data-url=\"https:\/\/blogs.nvidia.com\/blog\/2023\/09\/25\/ai-security-steps\/\" data-title=\"Six Steps Toward AI Security\" data-hashtags=\"\">\n<p>In the wake of ChatGPT, every company is trying to figure out its AI strategy, work that quickly raises the question: What about security?<\/p>\n<p>Some may feel overwhelmed at the prospect of securing new technology. The good news is policies and practices in place today provide excellent starting points.<\/p>\n<p>Indeed, the way forward lies in extending the existing foundations of enterprise and cloud security. 
It\u2019s a journey that can be summarized in six steps:<\/p>\n<ul>\n<li>Expand analysis of the threats<\/li>\n<li>Broaden response mechanisms<\/li>\n<li>Secure the data supply chain<\/li>\n<li>Use AI to scale efforts<\/li>\n<li>Be transparent<\/li>\n<li>Create continuous improvements<\/li>\n<\/ul>\n<figure id=\"attachment_67103\" aria-describedby=\"caption-attachment-67103\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/09\/Scaling-AI-security-NU-scaled.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/09\/Scaling-AI-security-NU-672x408.jpg\" alt=\"Chart on scaling AI security\" width=\"672\" height=\"408\"><\/a><figcaption id=\"caption-attachment-67103\" class=\"wp-caption-text\">AI security builds on protections enterprises already rely on.<\/figcaption><\/figure>\n<h2><b>Take in the Expanded Horizon<\/b><\/h2>\n<p>The first step is to get familiar with the new landscape.<\/p>\n<p>Security now needs to cover the AI development lifecycle. This includes new attack surfaces like training data, models and the people and processes using them.<\/p>\n<p>Extrapolate from known threat types to identify and anticipate emerging ones. For instance, an attacker might try to alter the behavior of an AI model by accessing the training data while a cloud service is training the model.<\/p>\n<p>The security researchers and red teams who probed for vulnerabilities in the past will be great resources again. They\u2019ll need access to AI systems and data to identify and act on new threats, as well as to build solid working relationships with data science staff.<\/p>\n<h2><b>Broaden Defenses<\/b><\/h2>\n<p>Once a picture of the threats is clear, define ways to defend against them.<\/p>\n<p>Monitor AI model performance closely. 
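Drift monitoring can start well before a full MLOps stack is in place. As a minimal sketch in pure Python (the 3-sigma threshold and the confidence-score windows are illustrative assumptions, not from this post), one can compare a recent window of model confidence scores against a healthy baseline:

```python
from statistics import mean, stdev

def drift_score(baseline, recent):
    """How many baseline standard deviations the recent mean
    confidence has shifted away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0 if mean(recent) == mu else float("inf")
    return abs(mean(recent) - mu) / sigma

def is_drifting(baseline, recent, threshold=3.0):
    """Flag drift when the shift exceeds an (illustrative) 3-sigma threshold."""
    return drift_score(baseline, recent) > threshold

# A sharp drop in average confidence trips the alarm:
baseline = [0.91, 0.93, 0.92, 0.90, 0.94]
recent = [0.60, 0.58, 0.62, 0.61, 0.59]
print(is_drifting(baseline, recent))  # True
```

Real deployments would track more signals (input distributions, per-class accuracy, latency), but even a crude statistical tripwire like this turns silent drift into an alert a security team can act on.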
Assume it will drift, opening new attack surfaces, just as it can be assumed that traditional security defenses will be breached.<\/p>\n<p>Also build on the PSIRT (product security incident response team) practices that should already be in place.<\/p>\n<p>For example, NVIDIA released <a href=\"https:\/\/www.nvidia.com\/en-us\/security\/psirt-policies\/\">product security policies<\/a> that encompass its AI portfolio. Several organizations \u2014 including the <a href=\"https:\/\/llmtop10.com\/\">Open Worldwide Application Security Project<\/a> \u2014 have released AI-tailored implementations of key security elements such as the common vulnerability enumeration method used to identify traditional IT threats.<\/p>\n<p>Adapt traditional defenses and apply them to AI models and workflows, such as:<\/p>\n<ul>\n<li>Keeping network control and data planes separate<\/li>\n<li>Removing any unsafe or personally identifying data<\/li>\n<li>Using <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/06\/07\/what-is-zero-trust\">zero-trust security<\/a> and authentication<\/li>\n<li>Defining appropriate event logs, alerts and tests<\/li>\n<li>Setting flow controls where appropriate<\/li>\n<\/ul>\n<h2><b>Extend Existing Safeguards<\/b><\/h2>\n<p>Protect the datasets used to train AI models. They\u2019re valuable and vulnerable.<\/p>\n<p>Once again, enterprises can leverage existing practices. Create secure data supply chains, similar to those created to secure software distribution channels. It\u2019s important to establish access control for training data, just as other internal data is secured.<\/p>\n<p>Some gaps may need to be filled. Today, security specialists know how to hash application files to ensure no one has altered their code. 
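That familiar integrity practice extends naturally to datasets. A minimal sketch, assuming a simple dict-based manifest of per-file SHA-256 digests (the manifest format here is illustrative, not a standard):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so arbitrarily large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest):
    """Return the paths whose current digest no longer matches the manifest."""
    return [p for p, d in manifest.items() if file_digest(p) != d]
```

Chunked reads keep memory use flat regardless of file size; for very large corpora, per-file digests can be computed in parallel and the manifest itself signed so tampering with the manifest is also detectable.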
That process may be challenging to scale for the petabyte-sized datasets used in AI training.<\/p>\n<p>The good news is researchers see the need, and they\u2019re working on tools to address it.<\/p>\n<h2><b>Scale Security With AI<\/b><\/h2>\n<p>AI is not only a new attack surface to defend; it\u2019s also a new and powerful security tool.<\/p>\n<p>Machine learning models can detect subtle changes in mountains of network traffic that no human can see. That makes AI an ideal technology for preventing many of the most widely used attacks, like identity theft, phishing, malware and ransomware.<\/p>\n<p><a href=\"https:\/\/developer.nvidia.com\/morpheus-cybersecurity\">NVIDIA Morpheus<\/a>, a cybersecurity framework, lets developers build AI applications that create, read and update digital fingerprints that scan for many kinds of threats. In addition, generative AI and Morpheus can enable <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-morpheus-helps-defend-against-spear-phishing-with-generative-ai\/\">new ways to detect spear phishing attempts<\/a>.<\/p>\n<figure id=\"attachment_67106\" aria-describedby=\"caption-attachment-67106\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/09\/Security-Use-Cases-NU.jpg\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2023\/09\/Security-Use-Cases-NU-672x383.jpg\" alt=\"Chart on AI security use cases\" width=\"672\" height=\"383\"><\/a><figcaption id=\"caption-attachment-67106\" class=\"wp-caption-text\">Machine learning is a powerful tool that spans many use cases in security.<\/figcaption><\/figure>\n<h2><b>Security Loves Clarity<\/b><\/h2>\n<p>Transparency is a key component of any security strategy. 
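Transparency artifacts can also be machine-checked. A hedged sketch of that idea, using an illustrative set of required fields rather than any real model-card schema:

```python
# Illustrative required fields; real model-card schemas are richer.
REQUIRED_FIELDS = {"name", "intended_use", "training_data", "limitations"}

def missing_fields(card):
    """Return required model-card fields that are absent or left empty."""
    return sorted(f for f in REQUIRED_FIELDS if not str(card.get(f, "")).strip())

card = {
    "name": "example-classifier",         # hypothetical model
    "intended_use": "demo only",
    "training_data": "synthetic samples",
    "limitations": "",                    # empty, so it gets flagged
}
print(missing_fields(card))  # ['limitations']
```

Gating publication on a check like this makes "describe the data and the constraints" an enforced policy rather than a convention.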
Let customers know about any new AI security policies and practices that have been put in place.<\/p>\n<p>For example, NVIDIA publishes details about the AI models in <a href=\"https:\/\/www.nvidia.com\/en-us\/gpu-cloud\/\">NGC<\/a>, its hub for accelerated software. Called <a href=\"https:\/\/developer.nvidia.com\/blog\/enhancing-ai-transparency-and-ethical-considerations-with-model-card\/\">model cards<\/a>, they act like truth-in-lending statements, describing AIs, the data they were trained on and any constraints for their use.<\/p>\n<p>NVIDIA uses an expanded set of fields in its model cards, so users are clear about the history and limits of a neural network before putting it into production. That helps advance security, establish trust and ensure models are robust.<\/p>\n<h2><b>Define Journeys, Not Destinations<\/b><\/h2>\n<p>These six steps are just the start of a journey. Processes and policies like these need to evolve.<\/p>\n<p>The emerging practice of <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/03\/01\/what-is-confidential-computing\/\">confidential computing<\/a>, for instance, is extending security across cloud services where AI models are often trained and run in production.<\/p>\n<p>The industry is already beginning to see basic versions of code scanners for AI models. They\u2019re a sign of what\u2019s to come. Teams need to keep an eye on the horizon for best practices and tools as they arrive.<\/p>\n<p>Along the way, the community needs to share what it learns. An excellent example of that occurred at the recent <a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/08\/10\/nvidia-generative-red-team-challenge\/\">Generative Red Team Challenge<\/a>.<\/p>\n<p>In the end, it\u2019s about creating a collective defense. 
We\u2019re all making this journey to AI security together, one step at a time.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/2023\/09\/25\/ai-security-steps\/<\/p>\n","protected":false},"author":0,"featured_media":3182,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3181"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3181"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3181\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3182"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3181"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3181"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3181"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}