{"id":1415,"date":"2021-12-31T16:47:47","date_gmt":"2021-12-31T16:47:47","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/31\/running-and-passing-information-to-a-python-script\/"},"modified":"2021-12-31T16:47:47","modified_gmt":"2021-12-31T16:47:47","slug":"running-and-passing-information-to-a-python-script","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/31\/running-and-passing-information-to-a-python-script\/","title":{"rendered":"Running and Passing Information to a Python Script"},"content":{"rendered":"<div id=\"\">\n<p id=\"last-modified-info\">Last Updated on December 29, 2021<\/p>\n<p>Running your Python scripts is an important step in the development process, because it is in this manner that you\u2019ll get to find out if your code works as you intended it to. It is, also, often the case that we would need to pass information to the Python script for it to function.<\/p>\n<p>In this tutorial, you will discover various ways of running and passing information to a Python script.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>How to run a Python script using the command-line interface, the Jupyter Notebook or an Integrated Development Environment (IDE).<span class=\"Apple-converted-space\">\u00a0<\/span><\/li>\n<li>How to pass information to a Python script using the\u00a0<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f5a809040126\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-k \">sys<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">argv<\/span><\/span><\/span>\u00a0 command, by hard-coding the input variables in Jupyter Notebook, or through the interactive use of the\u00a0<br \/>\n\t\t\t<span 
id=\"urvanov-syntax-highlighter-61ce6289e2f5e991828826\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-k \">input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-sy\">)<\/span><\/span><\/span>\u00a0 function.<span class=\"Apple-converted-space\">\u00a0<\/span><\/li>\n<\/ul>\n<p>Let\u2019s get started.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<div id=\"attachment_13156\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_cover-scaled.jpg\"><img aria-describedby=\"caption-attachment-13156\" loading=\"lazy\" class=\"wp-image-13156 size-large\" data-cfsrc=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_cover-1024x683.jpg\" alt=\"\" width=\"1024\" height=\"683\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13156\" loading=\"lazy\" class=\"wp-image-13156 size-large\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_cover-1024x683.jpg\" alt=\"\" width=\"1024\" height=\"683\"><\/a><\/p>\n<p id=\"caption-attachment-13156\" class=\"wp-caption-text\">Running and Passing Information to a Python Script<br \/>Photo by <a href=\"https:\/\/unsplash.com\/photos\/QVD3Xht9txA\">Andrea Leopardi<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2><b>Tutorial Overview<\/b><\/h2>\n<p>This tutorial is divided into two parts; they are:<\/p>\n<ul>\n<li>Running a Python Script\n<ul>\n<li>Using a Command-Line Interface<\/li>\n<li>Using Jupyter Notebook<\/li>\n<li>Using an Integrated Development Environment (IDE)<\/li>\n<\/ul>\n<\/li>\n<li>Python Input<\/li>\n<\/ul>\n<h2><b>Running a Python Script:<\/b><\/h2>\n<h3><b>Using a Command-Line Interface<\/b><\/h3>\n<p>The command-line interface is 
used extensively for running Python code.<\/p>
<p>Let\u2019s test a few commands by first opening up a Command Prompt or Terminal window, depending on the operating system that you are working on.<\/p>
<p>Typing the <code>python<\/code> command in your command-line interface will initiate a Python interactive session. You will see a message informing you of the Python version that you are using:<\/p>
<pre><code>Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags\/RELEASE_401\/final)] :: Anaconda, Inc. on darwin
Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information.<\/code><\/pre>
<p>Any statements that you write during an interactive session are executed immediately. For example, typing <code>2 + 3<\/code> returns a value of 5:<\/p>
<pre><code>&gt;&gt;&gt; 2 + 3
5<\/code><\/pre>
<p>Using an interactive session in this manner has its advantages: you can test out lines of Python code easily and quickly. However, it is not the ideal option if we are more interested in writing lengthier programs, as would be the case when developing a machine learning algorithm. The code also disappears once the interactive session is terminated.<\/p>
<p>An alternative option is to run a Python script. Let\u2019s start with a simple example.<\/p>
<p>In a text editor (such as <a href=\"https:\/\/notepad-plus-plus.org\/\">Notepad++<\/a>, <a href=\"https:\/\/code.visualstudio.com\/\">Visual Studio Code<\/a> or <a href=\"https:\/\/www.sublimetext.com\/\">Sublime Text<\/a>), type the statement <code>print(&quot;Hello World!&quot;)<\/code> and save the file as <em>test_script.py<\/em>, or any other name of your choice as long as you include a <em>.py<\/em> extension.<\/p>
<p>Now head back to your command-line interface and type the <code>python<\/code> command, followed by the name of your script file. Before you do so, you might need to change the path to point to the directory that contains the script file. Running the script file should then produce the following output:<\/p>
<pre><code>Hello World!<\/code><\/pre>
<p>Let\u2019s now write a script file that loads a pre-trained Keras model and outputs a prediction for <a href=\"https:\/\/unsplash.com\/photos\/2l0CWTpcChI\">this<\/a> image of a dog. It is often the case that we also need to pass information to the Python script in the form of command-line <i>arguments<\/i>. For this purpose, we will use the <code>sys.argv<\/code> list to pass the script the image path and the number of top guesses to return. We could have as many input arguments as the code requires, in which case we would keep on reading the inputs from the argument list.
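Before the full Keras example, the mechanics of reading arguments can be sketched on their own. This is a hypothetical <em>greet.py</em>, not part of the tutorial's scripts; the helper function simply simulates what happens when the interpreter fills in `sys.argv`:

```python
import sys

def parse_args(argv):
    # argv[0] is the script name; the real arguments start at index 1
    name = argv[1]
    count = int(argv[2])  # command-line arguments always arrive as strings
    return name, count

# In a real script this would be parse_args(sys.argv);
# here we simulate invoking `python greet.py Alice 2`
name, count = parse_args(["greet.py", "Alice", "2"])
for _ in range(count):
    print(f"Hello, {name}!")
```

Note the explicit `int(...)` conversion: every entry of `sys.argv` is a string, even when the user types a number.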
<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>The script file that we will be running now contains the following code:<\/p>\n<div id=\"urvanov-syntax-highlighter-61ce6289e2f68086870478\" class=\"urvanov-syntax-highlighter-syntax crayon-theme-classic urvanov-syntax-highlighter-font-monaco urvanov-syntax-highlighter-os-pc print-yes notranslate\" data-settings=\" minimize scroll-mouseover disable-anim\">\n<p><textarea class=\"urvanov-syntax-highlighter-plain print-no\" data-settings=\"dblclick\" readonly><br \/>\nimport sys<br \/>\nimport numpy as np<br \/>\nfrom tensorflow.keras.applications import vgg16<br \/>\nfrom tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions<br \/>\nfrom tensorflow.keras.preprocessing import image<\/p>\n<p># Load the VGG16 model pre-trained on the ImageNet dataset<br \/>\nvgg16_model = vgg16.VGG16(weights=&#8217;imagenet&#8217;)<\/p>\n<p># Read the command-line argument passed to the interpreter when invoking the script<br \/>\nimage_path = sys.argv[1]<br \/>\ntop_guesses = sys.argv[2]<\/p>\n<p># Load the image, resized according to the model target size<br \/>\nimg_resized = image.load_img(image_path, target_size=(224, 224))<\/p>\n<p># Convert the image into an array<br \/>\nimg = image.img_to_array(img_resized) <\/p>\n<p># Add in a dimension<br \/>\nimg = np.expand_dims(img, axis=0) <\/p>\n<p># Scale the pixel intensity values<br \/>\nimg = preprocess_input(img) <\/p>\n<p># Generate a prediction for the test image<br \/>\npred_vgg = vgg16_model.predict(img)<\/p>\n<p># Decode and print the top 3 predictions<br \/>\nprint(&#8216;Prediction:&#8217;, decode_predictions(pred_vgg, top=int(top_guesses)))<\/textarea><\/p>\n<div class=\"urvanov-syntax-highlighter-main\">\n<table class=\"crayon-table\">\n<tr class=\"urvanov-syntax-highlighter-row\">\n<td class=\"crayon-nums \" data-settings=\"show\">\n<div 
class=\"urvanov-syntax-highlighter-nums-content\">\n<p>1<\/p>\n<p>2<\/p>\n<p>3<\/p>\n<p>4<\/p>\n<p>5<\/p>\n<p>6<\/p>\n<p>7<\/p>\n<p>8<\/p>\n<p>9<\/p>\n<p>10<\/p>\n<p>11<\/p>\n<p>12<\/p>\n<p>13<\/p>\n<p>14<\/p>\n<p>15<\/p>\n<p>16<\/p>\n<p>17<\/p>\n<p>18<\/p>\n<p>19<\/p>\n<p>20<\/p>\n<p>21<\/p>\n<p>22<\/p>\n<p>23<\/p>\n<p>24<\/p>\n<p>25<\/p>\n<p>26<\/p>\n<p>27<\/p>\n<p>28<\/p>\n<p>29<\/p>\n<p>30<\/p>\n<\/div>\n<\/td>\n<td class=\"urvanov-syntax-highlighter-code\">\n<div class=\"crayon-pre\">\n<p><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-k \">sys<\/span><\/p>\n<p><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">numpy <\/span><span class=\"crayon-st\">as<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">np<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">applications <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">vgg16<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">applications<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">vgg16 <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">preprocess_input<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span 
class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">preprocessing <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-i\">image<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Load the VGG16 model pre-trained on the ImageNet dataset<\/span><\/p>\n<p><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">VGG16<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">weights<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-s\">&#8216;imagenet&#8217;<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Read the command-line argument passed to the interpreter when invoking the script<\/span><\/p>\n<p><span class=\"crayon-v\">image_path<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-k \">sys<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">argv<\/span><span class=\"crayon-sy\">[<\/span><span class=\"crayon-cn\">1<\/span><span class=\"crayon-sy\">]<\/span><\/p>\n<p><span class=\"crayon-v\">top_guesses<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-k \">sys<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">argv<\/span><span class=\"crayon-sy\">[<\/span><span class=\"crayon-cn\">2<\/span><span class=\"crayon-sy\">]<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Load the image, resized according to the model target size<\/span><\/p>\n<p><span class=\"crayon-v\">img_resized<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span 
class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">load_img<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">image_path<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">target_size<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Convert the image into an array<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">img_to_array<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img_resized<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Add in a dimension<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">np<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">expand_dims<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">axis<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-cn\">0<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Scale the pixel intensity values<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span 
class=\"crayon-e\">preprocess_input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Generate a prediction for the test image<\/span><\/p>\n<p><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">predict<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Decode and print the top 3 predictions<\/span><\/p>\n<p><span class=\"crayon-k \">print<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8216;Prediction:&#8217;<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">top<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-k \">int<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">top_guesses<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<p>In the above code, we read the command line arguments using <code>sys.argv[1]<\/code> and <code>sys.argv[2]<\/code> for the first two arguments. 
<p>We can run the script by using the <code>python<\/code> command followed by the name of the script file, further passing it as arguments the image path (after the image has been saved to disk) and the number of top guesses that we would like to predict:<\/p>
<pre><code>python pretrained_model.py dog.jpg 3<\/code><\/pre>
<p>Here, <em>pretrained_model.py<\/em> is the name of the script file, and the <em>dog.jpg<\/em> image has been saved into the same directory that also contains the Python script.<\/p>
<p>The generated top three guesses are the following:<\/p>
<pre><code>Prediction: [[('n02088364', 'beagle', 0.6751468), ('n02089867', 'Walker_hound', 0.1394801), ('n02089973', 'English_foxhound', 0.057901423)]]<\/code><\/pre>
<p>But the command line can do more. For example, the following runs the script in \u201coptimized\u201d mode, in which the debugging variable <code>__debug__<\/code> is set to <code>False<\/code> and <code>assert<\/code> statements are skipped:<\/p>
<pre><code>python -O test_script.py<\/code><\/pre>
<p>And the following launches the script through a Python module, such as the debugger:<\/p>
<pre><code>python -m pdb test_script.py<\/code><\/pre>
<p>We will have another post about the use of the debugger and profilers.<\/p>
<h3><b>Using Jupyter Notebook<\/b><\/h3>
<p>Running a Python script from the command-line interface is a straightforward option if your code generates a string output and not much else.<\/p>
<p>However, when we are working with images, it is often desirable to generate a visual output too: we might be checking the correctness of any pre-processing applied to the input image before feeding it into a neural network, or visualising the result that the neural network produces. The Jupyter Notebook offers an interactive computing environment that can help us achieve this.<\/p>
<p>One way of running a Python script through the Jupyter Notebook interface is simply to add the code to a \u201ccell\u201d in the notebook. But this means your code stays inside the Jupyter notebook and cannot be accessed elsewhere, such as from the command line as above.<\/p>
<p>Another way is to use the <code>%run<\/code> magic command, where <code>run<\/code> is prefixed by the <code>%<\/code> character. Try typing the following into a cell in Jupyter Notebook:<\/p>
<pre><code>%run pretrained_model.py dog.jpg 3<\/code><\/pre>
<p>Here, we are again specifying the name of the Python script file, <em>pretrained_model.py<\/em>, followed by the image path and the number of top guesses as the input arguments. You will see that the top three predictions are printed beneath the cell that produced this result.<\/p>
<p>Now, let\u2019s say that we would like to display the input image in order to check that it has been loaded according to the model target size. For this purpose, we will modify the code slightly and save it into a new Python script, <em>pretrained_model_image.py<\/em>:<\/p>
<pre><code>import sys
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications import vgg16
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load the VGG16 model pre-trained on the ImageNet dataset
vgg16_model = vgg16.VGG16(weights='imagenet')

# Read the arguments passed to the interpreter when invoking the script
image_path = sys.argv[1]
top_guesses = sys.argv[2]

# Load the image, resized according to the model target size
img_resized = image.load_img(image_path, target_size=(224, 224))

# Convert the image into an array
img = image.img_to_array(img_resized)

# Display the image to check that it has been correctly resized
plt.imshow(img.astype(np.uint8))
plt.show()

# Add in a batch dimension
img = np.expand_dims(img, axis=0)

# Scale the pixel intensity values
img = preprocess_input(img)

# Generate a prediction for the test image
pred_vgg = vgg16_model.predict(img)

# Decode and print the top predictions
print('Prediction:', decode_predictions(pred_vgg, top=int(top_guesses)))<\/code><\/pre>
class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">predict<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Decode and print the top 3 predictions<\/span><\/p>\n<p><span class=\"crayon-k \">print<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8216;Prediction:&#8217;<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">top<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-k \">int<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">top_guesses<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<p>Running the newly saved Python script through the Jupyter Notebook interface now displays the resized 224 &#215; 224 pixel image, in addition to printing the top three predictions:<\/p>\n<div id=\"urvanov-syntax-highlighter-61ce6289e2f73199581746\" class=\"urvanov-syntax-highlighter-syntax crayon-theme-classic urvanov-syntax-highlighter-font-monaco urvanov-syntax-highlighter-os-pc print-yes notranslate\" data-settings=\" minimize scroll-mouseover disable-anim\">\n<p><textarea class=\"urvanov-syntax-highlighter-plain print-no\" data-settings=\"dblclick\" readonly><br \/>\n%run pretrained_model_image.py dog.jpg 3<\/textarea><\/p>\n<div class=\"urvanov-syntax-highlighter-main\">\n<table class=\"crayon-table\">\n<tr class=\"urvanov-syntax-highlighter-row\">\n<td class=\"crayon-nums \" data-settings=\"show\">\n<\/td>\n<td class=\"urvanov-syntax-highlighter-code\">\n<div class=\"crayon-pre\">\n<p><span 
class=\"crayon-o\">%<\/span><span class=\"crayon-e\">run <\/span><span class=\"crayon-v\">pretrained_model_image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">py <\/span><span class=\"crayon-v\">dog<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-i\">jpg<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-cn\">3<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<div id=\"attachment_13154\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_1.png\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13154\" loading=\"lazy\" class=\"wp-image-13154\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_1-1024x559.png\" alt=\"\" width=\"674\" height=\"368\"><\/a><\/p>\n<p id=\"caption-attachment-13154\" class=\"wp-caption-text\">Running a Python Script in Jupyter Notebook<\/p>\n<\/div>\n<p>Alternatively, we can trim down the code to the following (and save it to yet another Python script, <em>pretrained_model_inputs.py<\/em>):<\/p>\n<div id=\"urvanov-syntax-highlighter-61ce6289e2f74743491621\" class=\"urvanov-syntax-highlighter-syntax crayon-theme-classic urvanov-syntax-highlighter-font-monaco urvanov-syntax-highlighter-os-pc print-yes notranslate\" data-settings=\" minimize scroll-mouseover disable-anim\">\n<p><textarea class=\"urvanov-syntax-highlighter-plain print-no\" data-settings=\"dblclick\" readonly><br \/>\n# Load the VGG16 model pre-trained on the ImageNet dataset<br \/>\nvgg16_model = vgg16.VGG16(weights=&#8217;imagenet&#8217;)<\/p>\n<p># Load the image, resized according to the model target size<br \/>\nimg_resized = 
image.load_img(image_path, target_size=(224, 224))<\/p>\n<p># Convert the image into an array<br \/>\nimg = image.img_to_array(img_resized) <\/p>\n<p># Display the image to check that it has been correctly resized<br \/>\nplt.imshow(img.astype(np.uint8))<\/p>\n<p># Add in a dimension<br \/>\nimg = np.expand_dims(img, axis=0) <\/p>\n<p># Scale the pixel intensity values<br \/>\nimg = preprocess_input(img) <\/p>\n<p># Generate a prediction for the test image<br \/>\npred_vgg = vgg16_model.predict(img)<\/p>\n<p># Decode and print the top 3 predictions<br \/>\nprint(&#8216;Prediction:&#8217;, decode_predictions(pred_vgg, top=top_guesses))<\/textarea><\/p>\n<div class=\"urvanov-syntax-highlighter-main\">\n<table class=\"crayon-table\">\n<tr class=\"urvanov-syntax-highlighter-row\">\n<td class=\"crayon-nums \" data-settings=\"show\">\n<div class=\"urvanov-syntax-highlighter-nums-content\">\n<p>1<\/p>\n<p>2<\/p>\n<p>3<\/p>\n<p>4<\/p>\n<p>5<\/p>\n<p>6<\/p>\n<p>7<\/p>\n<p>8<\/p>\n<p>9<\/p>\n<p>10<\/p>\n<p>11<\/p>\n<p>12<\/p>\n<p>13<\/p>\n<p>14<\/p>\n<p>15<\/p>\n<p>16<\/p>\n<p>17<\/p>\n<p>18<\/p>\n<p>19<\/p>\n<p>20<\/p>\n<p>21<\/p>\n<p>22<\/p>\n<p>23<\/p>\n<\/div>\n<\/td>\n<td class=\"urvanov-syntax-highlighter-code\">\n<div class=\"crayon-pre\">\n<p><span class=\"crayon-c\"># Load the VGG16 model pre-trained on the ImageNet dataset<\/span><\/p>\n<p><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">VGG16<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">weights<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-s\">&#8216;imagenet&#8217;<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Load the image, resized according to the model target size<\/span><\/p>\n<p><span class=\"crayon-v\">img_resized<\/span><span 
class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">load_img<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">image_path<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">target_size<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Convert the image into an array<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">img_to_array<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img_resized<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Display the image to check that it has been correctly resized<\/span><\/p>\n<p><span class=\"crayon-v\">plt<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">imshow<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">astype<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">np<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">uint8<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Add in a dimension<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span 
class=\"crayon-h\"> <\/span><span class=\"crayon-v\">np<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">expand_dims<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">axis<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-cn\">0<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Scale the pixel intensity values<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">preprocess_input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Generate a prediction for the test image<\/span><\/p>\n<p><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">predict<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Decode and print the top 3 predictions<\/span><\/p>\n<p><span class=\"crayon-k \">print<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8216;Prediction:&#8217;<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">top<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-v\">top_guesses<\/span><span 
class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<p>We then define the input variables in one of the cells of the Jupyter Notebook itself. Running the Python script in this manner requires that we also make use of the\u00a0<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f75055656209\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-o\">-<\/span><span class=\"crayon-v\">i<\/span><\/span><\/span>\u00a0 option after the\u00a0<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f76369814752\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-o\">%<\/span><span class=\"crayon-v\">run<\/span><\/span><\/span>\u00a0 magic:<\/p>\n<div id=\"urvanov-syntax-highlighter-61ce6289e2f77471931255\" class=\"urvanov-syntax-highlighter-syntax crayon-theme-classic urvanov-syntax-highlighter-font-monaco urvanov-syntax-highlighter-os-pc print-yes notranslate\" data-settings=\" minimize scroll-mouseover disable-anim\">\n<p><textarea class=\"urvanov-syntax-highlighter-plain print-no\" data-settings=\"dblclick\" readonly><br \/>\n%run -i pretrained_model_inputs.py<\/textarea><\/p>\n<div class=\"urvanov-syntax-highlighter-main\">\n<table class=\"crayon-table\">\n<tr class=\"urvanov-syntax-highlighter-row\">\n<td class=\"crayon-nums \" data-settings=\"show\">\n<\/td>\n<td class=\"urvanov-syntax-highlighter-code\">\n<div class=\"crayon-pre\">\n<p><span class=\"crayon-o\">%<\/span><span class=\"crayon-v\">run<\/span><span class=\"crayon-h\"> <\/span><span 
class=\"crayon-o\">&#8211;<\/span><span class=\"crayon-i\">i<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">pretrained_model_inputs<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">py<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<div id=\"attachment_13155\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_2.png\"><img aria-describedby=\"caption-attachment-13155\" loading=\"lazy\" class=\"wp-image-13155\" data-cfsrc=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_2-1024x760.png\" alt=\"\" width=\"674\" height=\"500\"><img decoding=\"async\" aria-describedby=\"caption-attachment-13155\" loading=\"lazy\" class=\"wp-image-13155\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/invoking_python_2-1024x760.png\" alt=\"\" width=\"674\" height=\"500\"><\/a><\/p>\n<p id=\"caption-attachment-13155\" class=\"wp-caption-text\">Running a Python Script in Jupyter Notebook<\/p>\n<\/div>\n<p>The advantage in doing so is to gain easier access to variables inside the Python script that can be defined interactively.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>As your code grows, combining the use of a text editor with Jupyter Notebook could provide for a convenient way forward: the text editor can be used to create Python scripts, which store code that can be reused, while the Jupyter Notebook provides an interactive computing environment for easier data exploration.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<h2><b>Using an Integrated Development Environment (IDE)<\/b><\/h2>\n<p>Another option is to run the Python script from an IDE. 
This requires that a project is created first and that the Python script, with a <em>.py<\/em> extension, is added to it.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>If we consider PyCharm or Visual Studio Code as the IDE of choice, we would create a new project and subsequently choose the version of the Python interpreter that we would like to work with. After adding the Python script to the newly created project, it can be run to generate an output. The following is a screenshot of running Visual Studio Code on macOS. Depending on the IDE, there should be an option to run the code with or without the debugger.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-13160 aligncenter\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/12\/VScode.png\" alt=\"\" width=\"800\" height=\"1024\"><\/p>\n<h2><b>Python Input<\/b><\/h2>\n<p>We have, so far, considered the options of passing information to the Python script by means of the\u00a0<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f78528042422\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-k \">sys<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">argv<\/span><\/span><\/span>\u00a0 variable, or by hard-coding the input variables in Jupyter Notebook before running the script.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>Another option is to take input from the user through the\u00a0<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f79409407260\" 
class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-k \">input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-sy\">)<\/span><\/span><\/span>\u00a0 function.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>Consider the following code:<\/p>\n<div id=\"urvanov-syntax-highlighter-61ce6289e2f7a695186450\" class=\"urvanov-syntax-highlighter-syntax crayon-theme-classic urvanov-syntax-highlighter-font-monaco urvanov-syntax-highlighter-os-pc print-yes notranslate\" data-settings=\" minimize scroll-mouseover disable-anim\">\n<p><textarea class=\"urvanov-syntax-highlighter-plain print-no\" data-settings=\"dblclick\" readonly><br \/>\nimport numpy as np<br \/>\nimport matplotlib.pyplot as plt<br \/>\nfrom tensorflow.keras.applications import vgg16<br \/>\nfrom tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions<br \/>\nfrom tensorflow.keras.preprocessing import image<\/p>\n<p># Load the VGG16 model pre-trained on the ImageNet dataset<br \/>\nvgg16_model = vgg16.VGG16(weights=&#8217;imagenet&#8217;)<\/p>\n<p># Ask the user for manual inputs<br \/>\nimage_path = input(&#8220;Enter image path: &#8220;)<br \/>\ntop_guesses = input(&#8220;Enter number of top guesses: &#8220;)<\/p>\n<p># Load the image, resized according to the model target size<br \/>\nimg_resized = image.load_img(image_path, target_size=(224, 224))<\/p>\n<p># Convert the image into an array<br \/>\nimg = image.img_to_array(img_resized)<\/p>\n<p># Add in a dimension<br \/>\nimg = np.expand_dims(img, axis=0) <\/p>\n<p># Scale the pixel intensity values<br \/>\nimg = preprocess_input(img) <\/p>\n<p># Generate a prediction for the test image<br \/>\npred_vgg = vgg16_model.predict(img)<\/p>\n<p># Decode and print the top 3 predictions<br 
\/>\nprint(&#8216;Prediction:&#8217;, decode_predictions(pred_vgg, top=int(top_guesses)))<\/textarea><\/p>\n<div class=\"urvanov-syntax-highlighter-main\">\n<table class=\"crayon-table\">\n<tr class=\"urvanov-syntax-highlighter-row\">\n<td class=\"crayon-nums \" data-settings=\"show\">\n<div class=\"urvanov-syntax-highlighter-nums-content\">\n<p>1<\/p>\n<p>2<\/p>\n<p>3<\/p>\n<p>4<\/p>\n<p>5<\/p>\n<p>6<\/p>\n<p>7<\/p>\n<p>8<\/p>\n<p>9<\/p>\n<p>10<\/p>\n<p>11<\/p>\n<p>12<\/p>\n<p>13<\/p>\n<p>14<\/p>\n<p>15<\/p>\n<p>16<\/p>\n<p>17<\/p>\n<p>18<\/p>\n<p>19<\/p>\n<p>20<\/p>\n<p>21<\/p>\n<p>22<\/p>\n<p>23<\/p>\n<p>24<\/p>\n<p>25<\/p>\n<p>26<\/p>\n<p>27<\/p>\n<p>28<\/p>\n<p>29<\/p>\n<p>30<\/p>\n<\/div>\n<\/td>\n<td class=\"urvanov-syntax-highlighter-code\">\n<div class=\"crayon-pre\">\n<p><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">numpy <\/span><span class=\"crayon-st\">as<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">np<\/span><\/p>\n<p><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">matplotlib<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">pyplot <\/span><span class=\"crayon-st\">as<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">plt<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">applications <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">vgg16<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">applications<\/span><span 
class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">vgg16 <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">preprocess_input<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><\/p>\n<p><span class=\"crayon-st\">from<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">tensorflow<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">keras<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">preprocessing <\/span><span class=\"crayon-r\">import<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-i\">image<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Load the VGG16 model pre-trained on the ImageNet dataset<\/span><\/p>\n<p><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">VGG16<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">weights<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-s\">&#8216;imagenet&#8217;<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Ask the user for manual inputs<\/span><\/p>\n<p><span class=\"crayon-v\">image_path<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-k \">input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8220;Enter image path: &#8220;<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p><span class=\"crayon-v\">top_guesses<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-k \">input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8220;Enter number of top guesses: 
&#8220;<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Load the image, resized according to the model target size<\/span><\/p>\n<p><span class=\"crayon-v\">img_resized<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">load_img<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">image_path<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">target_size<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-cn\">224<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Convert the image into an array<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">image<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">img_to_array<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img_resized<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Add in a dimension<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">np<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">expand_dims<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">axis<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-cn\">0<\/span><span 
class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Scale the pixel intensity values<\/span><\/p>\n<p><span class=\"crayon-v\">img<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">preprocess_input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-h\"> <\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Generate a prediction for the test image<\/span><\/p>\n<p><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">vgg16_model<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-e\">predict<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">img<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<p>\u00a0<\/p>\n<p><span class=\"crayon-c\"># Decode and print the top 3 predictions<\/span><\/p>\n<p><span class=\"crayon-k \">print<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-s\">&#8216;Prediction:&#8217;<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-e\">decode_predictions<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">pred_vgg<\/span><span class=\"crayon-sy\">,<\/span><span class=\"crayon-h\"> <\/span><span class=\"crayon-v\">top<\/span><span class=\"crayon-o\">=<\/span><span class=\"crayon-k \">int<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-v\">top_guesses<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><span class=\"crayon-sy\">)<\/span><\/p>\n<\/div>\n<\/td>\n<\/tr>\n<\/table><\/div>\n<\/p><\/div>\n<p>Here, the user is prompted to manually enter the image path (the image has been saved into the same directory that also contains the Python script and, hence, 
specifying the image name is sufficient), and the number of top guesses to generate. Both input values are of type string; however, the number of top guesses is later cast to an integer when it is used.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>Whether this code is run from the command-line interface, in Jupyter Notebook, or in a Python IDE, it will prompt the user for the required inputs and subsequently generate the number of predictions that the user asks for.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<h2><b>Further Reading<\/b><\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3><b>Books<\/b><\/h3>\n<h2><b>Summary<\/b><\/h2>\n<p>In this tutorial, you discovered various ways of running and passing information to a Python script.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>How to run a Python script using the command-line interface, the Jupyter Notebook or an Integrated Development Environment (IDE).<span class=\"Apple-converted-space\">\u00a0<\/span><\/li>\n<li>How to pass information to a Python script using the<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f7b949651669\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span class=\"crayon-k \">sys<\/span><span class=\"crayon-sy\">.<\/span><span class=\"crayon-v\">argv<\/span><\/span><\/span>\u00a0 variable, by hard-coding the input variables in Jupyter Notebook, or through the interactive use of the<br \/>\n\t\t\t<span id=\"urvanov-syntax-highlighter-61ce6289e2f7c126526553\" class=\"urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-classic crayon-theme-classic-inline urvanov-syntax-highlighter-font-monaco\"><span class=\"crayon-pre urvanov-syntax-highlighter-code\"><span 
class=\"crayon-k \">input<\/span><span class=\"crayon-sy\">(<\/span><span class=\"crayon-sy\">)<\/span><\/span><\/span>\u00a0 function.<\/li>\n<\/ul>\n<p>Do you have any questions?<\/p>\n<p>Ask your questions in the comments below and I will do my best to answer.<\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/machinelearningmastery.com\/running-and-passing-information-to-a-python-script\/<\/p>\n","protected":false},"author":0,"featured_media":1416,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1415"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1415"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1415\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1416"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1415"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}