{"id":1297,"date":"2021-12-02T08:29:44","date_gmt":"2021-12-02T08:29:44","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/02\/building-a-simple-image-classifier-on-the-bigml-dashboard\/"},"modified":"2021-12-02T08:29:44","modified_gmt":"2021-12-02T08:29:44","slug":"building-a-simple-image-classifier-on-the-bigml-dashboard","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/12\/02\/building-a-simple-image-classifier-on-the-bigml-dashboard\/","title":{"rendered":"Building a Simple Image Classifier on the BigML Dashboard"},"content":{"rendered":"<div>\n<p>BigML\u2019s <a rel=\"noreferrer noopener\" href=\"https:\/\/bigml.com\/releases\/image-processing\" target=\"_blank\"><strong>upcoming release on Wednesday, December 15, 2021<\/strong><\/a>, will bring a new set of <strong><a rel=\"noreferrer noopener\" href=\"https:\/\/bigml.com\/image-processing\/\" target=\"_blank\">Image Processing<\/a><\/strong> resources to the BigML platform. In this post, we show you how to build a simple image classifier on the BigML Dashboard. Let\u2019s start!<\/p>\n<p>Image classification is a supervised learning technique for images. Image classification models are trained to identify various classes of images and have a tremendous range of applications, as touched on in our prior posts. As such, BigML introduces image data support with the latest Image Processing release. In this post, we build a simple image classification application from scratch, showing how easily, quickly, and accurately image classification can be done on the BigML Dashboard.<\/p>\n<p>When I walk in my neighborhood, I see a lot of beautiful flowers \u2014 many neighbors enjoy gardening. Lilies are especially popular. With large and colorful blooms, lilies are prominent in any front yard. But recently I was told some of the \u201clilies\u201d I saw were actually daylilies, not lilies. 
I\u2019m not a flower person, let alone a botanist, so it\u2019s beyond my expertise to know which are which.\u00a0<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" data-attachment-id=\"30190\" data-permalink=\"https:\/\/blog.bigml.com\/bigml_image_classification-1\/\" data-orig-file=\"https:\/\/littleml.files.wordpress.com\/2021\/12\/bigml_image_classification-1.jpg\" data-orig-size=\"1200,630\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"bigml_image_classification-1\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/littleml.files.wordpress.com\/2021\/12\/bigml_image_classification-1.jpg?w=300\" data-large-file=\"https:\/\/littleml.files.wordpress.com\/2021\/12\/bigml_image_classification-1.jpg?w=810\" src=\"https:\/\/littleml.files.wordpress.com\/2021\/12\/bigml_image_classification-1.jpg?w=1024\" alt=\"\" class=\"wp-image-30190\"><\/figure>\n<p>I decided to build an image classifier using BigML to help us identify whether a flower is a lily or daylily. This way, we don\u2019t have to understand difficult technical terms, e.g. petals vs. sepals. Plus, this author is a firm believer that \u201ca picture is worth a thousand words!\u201d<\/p>\n<h2 id=\"preparing-the-data\">Preparing the Data<\/h2>\n<p>I went on the Internet, found and downloaded pictures of lilies and daylilies, 108 of each.<\/p>\n<p>First, you need to label the pictures because image classification needs labels to build models. BigML provides many flexible ways to <a href=\"https:\/\/blog.bigml.com\/2021\/11\/29\/the-many-ways-of-labeling-images-on-the-bigml-platform\/\"><strong>label your images<\/strong><\/a>. 
The most straightforward way is to organize the images by folders, with the folder names being the labels, or classes.\u00a0<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/tj3B0nfoYWskXXVFUTFem8DkLIYwslEeA0BMbgw9x7l73gXvjvHJ0I_DpNbIEFxfwZRc_Ew1snCA7wgBv0oFOlPV-k0D5gzr7vNTW69TAOCF9RINTpokWSJ828YCw2TP1_NGuk_w\" alt=\"\"><\/figure>\n<p>As seen above, you can put the pictures into two folders. All daylily pictures go in the \u201cdaylily\u201d folder, and all lily pictures go in the \u201clily\u201d folder. With this structure, the folder names become the labels of the images they contain.<\/p>\n<p>Now, select both folders and compress them into a zip file. Or, on the command line, issue a command such as:<\/p>\n<pre class=\"wp-block-preformatted\">zip -r lily-or-daylily.zip lily daylily<\/pre>\n<h2 id=\"uploading-the-data\">Uploading the Data<\/h2>\n<p>You can drag and drop the zip file onto the BigML Dashboard for uploading. Alternatively, if your data is in the cloud, you can perform a remote upload by using its URL.\u00a0<\/p>\n<p>Once the zip file is uploaded, an image <a href=\"https:\/\/blog.bigml.com\/2021\/11\/24\/composite-sources-in-bigml\/\"><strong>composite source<\/strong><\/a> is created:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/oCc44LUbAZhng_nTgwS4tbEcVHnb8W7AohndsNdrqLNE-ay5t2hpBG1XHkOf-fQzfQh85JhxZngmAeFrX1rmQ5rdycilKaccqdgckoEKioUpOxnBrcaUZwGdWMsQx5OsemcX9PGA\" alt=\"\"><\/figure>\n<p>An image composite source is a collection of image sources. 
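The folder-per-class labeling scheme above is easy to reproduce programmatically. Below is a minimal sketch using only the Python standard library, assuming the images already sit in folders named after their classes; the helper name and paths are illustrative, not a BigML tool:

```python
import zipfile
from pathlib import Path

def zip_labeled_folders(folders, archive="lily-or-daylily.zip"):
    """Zip class folders so that the folder names serve as image labels."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for folder in folders:
            for image in sorted(Path(folder).rglob("*")):
                if image.is_file():
                    # Store each entry as "<label>/<file>" so the label survives.
                    zf.write(image, image.relative_to(Path(folder).parent))
    return archive
```

The resulting archive is equivalent to the one produced by the zip command above: each entry is stored as label\/filename, so the folder names survive as labels when BigML reads the archive.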
Clicking on the composite source in the source list view above brings up three views of it, selectable via the three tabs on the left under the \u201cFORMAT\u201d heading.\u00a0The default view is the \u201cFields\u201d view, which displays the fields of the composite source:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/cWwcjLK67uIsFkHw9AlMJbs7dUbNDeWlUbpfyN1Uay6VA05T_yuCK66RhW6yj2TxMWuDnmqgYxZaZoIgIXa5X0SV3pdO_hiEnenwqM2-D-1dcEiWhVhYuFVQnBg0CidPwX0cp30v\" alt=\"\"><\/figure>\n<p>As expected, one of the fields is \u201clabel\u201d, whose values were taken from the folder names in the data.<\/p>\n<p>The \u201cSources\u201d view lists all the component sources of the composite, that is, all the image sources:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/PvCBJBVZFnpqHhq13IW1W0KT-lS_DkEz8b8XBjSC8uELIl1Y59CaG3hYKe5A-Qz5Bc5h7sYvUdlWpIjEPYLA6YDRpiQgHp5FrZES89nLc2ICIOjUYR_GSX0Ok79NLJt_xAodiVVe\" alt=\"\"><\/figure>\n<p>You can click an individual source to view its image and related details:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/nis_ywdGgS0kbC8hxzwhRkcRe-K5vGMyn4OgMPs_6g-7p_uLdORvuk_roISC7Z7WiBRki2W-gz60jFli9yVHbm_vDvAfR8bCMIrWto6Aa9nQJt6Wn4TDVWfp2Hfj6yfsga7MajkY\" alt=\"\"><\/figure>\n<p>In the \u201cImages\u201d view, you can see all the images and their labels:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/lO64wedS2dF6azhHXnad5YjZtIpoWC0xTlQRRmUc2zQYyoqnC1UQ7AZa8ICU6IyhlIsaCG4MbrOkDtPJULAd2pj_7GzF4U7s1QM7VBe-Io6XrMDBagZsWjPyFBHvZ19F_1M_7F31\" alt=\"\"><\/figure>\n<p>In this view, you can also select images and correct their labels.<\/p>\n<p>When an image composite source is created, BigML analyzes the images and automatically generates a set of numeric features for each image. 
Those features appear as added fields in the composite source. You can configure different sets of image features. Some capture low-level features, such as edges and colors, while the pre-trained CNNs capture more sophisticated ones. In addition to training a deepnet as an image classifier, we will use one of the pre-trained CNNs to create a different image classifier.<\/p>\n<p>Clone the image composite source, which creates a new composite as an exact copy:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/TgF8tcClsu28YRKAQP2w2POsLTs0TzWpa6jn2_cEB3OLP1hNMxgMsH-aHQb9xS7r0PBh2RbNtVH058dR7wvJI8krf4HVf57KR6KPdJPpPUborfJANcA3CgCUNBcWoEu8tkSyCDVx\" alt=\"\"><\/figure>\n<p>Then, from the newly cloned source, go to \u201cConfigure source\u201d. In the \u201cImage analysis\u201d panel, select \u201cResNet-18\u201d from the \u201cPre-trained CNN\u201d dropdown list and deselect \u201cHistogram of gradients\u201d, which was the default choice:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/H0RO-_fMrXEen35NqnjByTd_LM2rPpd6WrpLaDTU-qMjdvDY8lkzj-0FfK0iNuXwQN7vCT8Zmog7U1rufX8mostCZLNA8Ov23MpFwhtWentWr2KDOh8XcskzEIqPTz71zhqbQ2fb\" alt=\"\"><\/figure>\n<p>Rename the composite source to \u201clily-or-daylily resnet18\u201d. After the composite is updated, you can see that it contains 512 \u201cIMAGE FEATURES\u201d fields:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/FIinaC0wGseQCrBee7HdBZHWAMLhOPMIYfxNbmySUSsJd-wnlMDSBukVqxkTMEXAmA8wzH0c1p9sJOEEPy2Cnn1jiXvDo0m1Q8n3KNye89PfWodT5U2VuMlY5iB8mjb0SIT7A0tb\" alt=\"\"><\/figure>\n<h2 id=\"creating-datasets\">Creating Datasets<\/h2>\n<p>The composite sources are ready now. 
By using the 1-click dataset option in the cloud action menu, create two datasets, one from \u201clily-or-daylily\u201d, another from \u201clily-or-daylily resnet18\u201d:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/PoUXHYtA2hCNy9MGDJ-oOhD3Dzb9R0-5WlV1CO0fiMfFL8lNNlNdO-n2x2HZA_WbQtg_aPlOaYrG2-JJCqOfIIILm8PhUlPJb2Pdk5hwNWbVeTnzr05edlTPVc24h8U7MCwGHqRd\" alt=\"\"><\/figure>\n<p>After a dataset is created, in its detailed view you can see the field summaries, some univariate statistics, and the corresponding field histograms.\u00a0<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/N1DKp20jvFGGZiLIZjPt96-iUyivjHyRQ9wLaWTO0yGYBMG26KSB50cLF9M5ilrJyvKke0xoPM3uqjL0a1hFUVP-xIA-jVYTE5JzDRWItFMzXhk034vxt6r9MXQR_Fv-zfJX-kqO\" alt=\"\"><\/figure>\n<p>The histogram of the image_id field shows handy mini previews of the images, which can be refreshed by reloading. You can easily see the distribution of the label classes from the \u201clabel\u201d field histogram. The red exclamation point denotes that the \u201cfilename\u201d field is automatically set to non-preferred, which means it won\u2019t be used when training a model. Above the field names, you can also see that \u201clabel\u201d was automatically assigned as the objective field.<\/p>\n<p>Image feature fields are hidden by default\u00a0to reduce clutter, because there are typically at least several dozen of them. There is an icon \u201cClick to show image features\u201d next to the \u201cSearch by name\u201d box, which you can click to see those fields.\u00a0<\/p>\n<p>Before you create models, split each dataset into two datasets so that you can use one to train models while using the other for evaluation. 
BigML provides a 1-click \u201cTraining|Test Split\u201d option, which randomly sets aside 80% of the instances for training and 20% for testing.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/TO48W55B3X9G9BWrgEWiDPV8vLVgftokgwWNDrHuG0a9skIuCV3sgMtgOstMB0aQfG8PwQy0D7Dy3aqvbhAAFCozr48OIOVzc-ZjTADZ6cuUHNtORgXqTkQGwibyHy936-mEWJ1u\" alt=\"\"><\/figure>\n<h2 id=\"creating-models\">Creating Models<\/h2>\n<p><a href=\"https:\/\/blog.bigml.com\/2017\/09\/26\/introduction-to-deepnets\/\"><strong>Deepnet<\/strong><\/a> is the BigML resource for deep neural networks. When you create a deepnet from a dataset containing images, BigML will train a particular type of deep neural network, a Convolutional Neural Network (CNN), and all extracted image feature fields will be ignored.<\/p>\n<p>From the training dataset \u201clily-or-daylily|Training [80%]\u201d, use the 1-click deepnet option from the cloud action menu to create a deepnet using default parameter values.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/ev15OTawPZBwMzOpma8MEuW5ehoOZjSZgBVmaPx6YONk4n4e7DqPnG8jwho6oYvNcXGh6tKcnUwTi6EkfklvhmyUESjYO4FM4_GeAruECqp0S2f95s9YZZZUMXLw1KBnacQ8fsGQ\" alt=\"\"><\/figure>\n<p>While CNNs are excellent at image classification, their training times can be long, especially when the dataset has thousands of images or more. Image features generated by pre-trained CNNs can capture sophisticated features and are therefore effective for both supervised and unsupervised models. 
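To see why fixed feature vectors pair well with a lightweight model, here is a toy, pure-Python sketch of the general technique: a logistic regression trained by gradient descent on precomputed feature vectors. This is only an illustration, not BigML\u2019s implementation, and the tiny 2-dimensional vectors stand in for the 512 ResNet-18 features:

```python
import math

def train_logistic(features, labels, lr=0.5, epochs=200):
    """Fit a logistic regression on fixed feature vectors via gradient descent."""
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            logit = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (say, lily) if the predicted probability exceeds 0.5, else 0."""
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(1.0 / (1.0 + math.exp(-logit)) > 0.5)
```

Because the expensive feature extraction happens once, up front, retraining a model like this is cheap compared with training a CNN end to end.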
For image classification, you can use image features to train other supervised models such as ensembles or logistic regressions, which usually take much less time.<\/p>\n<p>Following this logic, the dataset \u201clily-or-daylily resnet18\u201d has 512 image feature fields generated by the pre-trained CNN \u201cResNet-18\u201d (having tried the five pre-trained CNNs available, I decided to use ResNet-18 for this application). From its training set \u201clily-or-daylily resnet18|Training [80%]\u201d, create a 1-click logistic regression.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/F6Q376vgJdPUywG0R-7E53hd4RVA1h1WVjfIDZgLyrVYM6sAS0Dl2xgOJOENv-ODaDQvVDDhilTqVKlU23GhVhzDrJWAUiYo9KRbdVgqUDoXUJVeeRA4UDwuglCnlacKhy3Xx7Py\" alt=\"\"><\/figure>\n<h2 id=\"evaluating-the-models\">Evaluating the Models<\/h2>\n<p>After the deepnet \u201clily-or-daylily|Training [80%]\u201d is created, you\u2019re presented with the image deepnet page:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/Xy1PVFeGufgSEW0PP75KfnYDjutOcm41TGQsnxcskg7XshZW8mnoSvVS2awsGpXE1fKu_x1u-pVIWoGCU_8FadQw6gsjM76CGnU6eUNJvhxak4Q9FjZ2l9bKkks3aPwP3tfgUIKx\" alt=\"\"><\/figure>\n<p>You can see lots of useful information about the deepnet, including its algorithm and parameters. The main focus of the page is the performance on a set of sampled instances, which were used during the deepnet training for validation. You can go over the images that were classified correctly, as well as the images classified incorrectly. However, this is not a true evaluation. In order to measure the deepnet\u2019s true performance (how good it is at classifying images not seen in training), you need to create a BigML evaluation. 
For this, go to the cloud action menu and click on \u201cEVALUATE\u201d:\u00a0<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/Ze2FVs7PyFFKLueN_MzNTgDsWzDdb8u1wPMzov--FTJASUjkxvwDpD295XTgBlBtr7CAW_DiuTtIuh3X4FlQ_GAxKVOaJLwamDVQpLWxQlGFxgXPK_QxdR3PsgFz5_SZqMdeOlZU\" alt=\"\"><\/figure>\n<p>You can see the evaluation configuration:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/AgWnGHOX_eN9zxX3-SnfBIyajx3FpK0EeegHKQgdnXr-1vpfLX9nC3G9LtlO1dQ2Apr_EY0DtPX_rXekjQtGCxrmrru5yZqi6E6PhA3blBtcvnxKVKbKJjoFqWq4zpDxtwxqeOEE\" alt=\"\"><\/figure>\n<p>The test dataset is the one we split from the original dataset; it has 44 instances, about 20% of the 216 images. The 80% training dataset we used for creating the deepnet has 172 instances. Click on the \u201cEvaluate\u201d button to create an evaluation:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/VUhWlN1TyX8VW03jHuOBQ7k7iq8c2WnBcnMSY0eNdzl3zDp5SsOru_vnozHeuZeXcpNee340ioR31sW4KO_ViZ2pt61L4ZYOqWZORYwqVNLGh_lEImPiNAIhvwXE4AnzBLgeFz7g\" alt=\"\"><\/figure>\n<p>The overall accuracy of the deepnet is 93.2%. We also did an evaluation of the logistic regression \u201clily-or-daylily resnet18|Training [80%]\u201d:<img decoding=\"async\" loading=\"lazy\" width=\"624\" height=\"481\" src=\"https:\/\/lh5.googleusercontent.com\/_Odr7GbNVxdhMfpS_psmqhajPLCMDn4nYznlalK-_-pvapdw2--2pMmpmnvLcBAU47GGmtAubE19MBDgDKMw7ITt_kgpy56UmYpeiZdeOlvAk_memJ6ghX1u3rg4InpbOSqMIDGu\"><\/p>\n<p>The accuracy of the logistic regression is still very high at 90.9%, just a bit lower than the accuracy of the deepnet. 
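These accuracies are plain fractions of correct predictions over the 44-instance test set, consistent with 41 of 44 correct for the deepnet (about 93.2%) and 40 of 44 for the logistic regression (about 90.9%). A minimal sketch of the computation:

```python
def accuracy(true_labels, predicted_labels):
    """Fraction of test instances whose predicted class matches the true class."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

# 41 of 44 correct rounds to 93.2%; 40 of 44 rounds to 90.9%.
```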
But keep in mind: when the number of images reaches thousands, deepnets can take hours to train, while logistic regressions take only minutes.<\/p>\n<h2 id=\"classifying-new-images\">Classifying New Images<\/h2>\n<p>I took 9 pictures of flowers around my neighborhood, and now I\u2019m eager to classify them. Here, we\u2019ll only show the steps for the deepnet. The process is very similar with the logistic regression; just remember to configure the new images with \u201cResNet-18\u201d image features.\u00a0<\/p>\n<p>Drag and drop the new pictures onto the BigML Dashboard; they will be created as image sources.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/9BflVq1n9NetX6JTLCeOmKoBG1n-f3jPRfyQfYQZ8T5Xlxv2Gd3_MQbyWG4om8bv5-QbFbZMwRnk6Kr-BoZMCM_4-C56ssZYzdeE09yqugQRl33R62P7EzvpRg2XI8MiwAuiVW0G\" alt=\"\"><\/figure>\n<p>From the deepnet page, click on the \u201cPREDICT\u201d option on the cloud action menu:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/iqGssyYc0SX4NVm5WMRozclaeB97UQhJpdqDn1JzueXemuuCl0B6w2jfw8_oY0dPALtge_brTvs8O6svr4A0898pVM1GClYoI6nV9tGMhJzvJLLZPsGU65T38UKnbOQD8Hi85w4F\" alt=\"\"><\/figure>\n<p>This brings up the \u201cprediction form\u201d of the deepnet we created, where you can select an uploaded image and classify it.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/171G5jPr6TGJsMcJdmTieskZf2oOW2koNl_gT_B7nmquHSv1f9b8uMSqSlrHibPhudnpXhM30sJ6vDOGW77OxJJREepVyZ_uGmXQm5C-fF9TszjxLAZJQnR4nn0c1O24hgqgUDxe\" alt=\"\"><\/figure>\n<p>Use the \u201cSelect image\u201d dropdown menu item, pick the image you want to classify, and then click on the \u201cPredict\u201d button.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" 
src=\"https:\/\/lh5.googleusercontent.com\/LYv64tfhdNgjbSKyH9345Dfm4BbuH2BhFAfU8ROxbC6u60-M4jO7itHnw8tsaKsUnq6-HhTljf1PrSdYKWmoMbiwRTPM9k9Z6eyUAgCJD4gi5l422qIvpleGMxjsrCHz7oyV71tD\" alt=\"\"><\/figure>\n<p>You can see that the selected image is classified as lily with a probability of 99.28%. Another one was classified as daylily with its probability at 91.31%.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" data-attachment-id=\"30118\" data-permalink=\"https:\/\/blog.bigml.com\/lily-prediction-2\/\" data-orig-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-prediction-2.png\" data-orig-size=\"948,651\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"lily-prediction-2\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-prediction-2.png?w=300\" data-large-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-prediction-2.png?w=810\" src=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-prediction-2.png?w=948\" alt=\"\" class=\"wp-image-30118\"><\/figure>\n<p>Of course, you can also use BATCH PREDICTION to classify multiple images. 
First, you need a dataset containing the images so click on \u201cCreate composite source\u201d on the action bar in the source list view:<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" data-attachment-id=\"30117\" data-permalink=\"https:\/\/blog.bigml.com\/lily-create-composite\/\" data-orig-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-create-composite.png\" data-orig-size=\"948,183\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"lily-create-composite\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-create-composite.png?w=300\" data-large-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-create-composite.png?w=810\" src=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/lily-create-composite.png?w=948\" alt=\"\" class=\"wp-image-30117\"><\/figure>\n<p>Select the uploaded images as the components. 
You can see the 9 pictures I took here:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/1zXl3_r0ON3DXtbYHUuZp8Er8EBEhOXLiSEzRYG2ndPySgZCJJW3bVH_CCl6rj-aTAg24pgpitUdF8_Tk82c8F40Ww0SnO47D9gZ5fzqGvpXXn3SyOYoZ42zh4-1-67gBzjFcjSB\" alt=\"\"><\/figure>\n<p>Then you can create a composite source, naming it \u201cneighborhood-flowers\u201d:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/tzxfAfmr9jUtVN_05zoQvBPQ9jKqLu4UXUA5vpdtY-W6DynD0pDDotQhL-eda68xaHnUmrHuG66MaZ_b4oL2wS0LOAwryFkFAqWvC4ZNUmyvkyDho61pEFtu0WLucBrHBG-8F4gj\" alt=\"\"><\/figure>\n<p>After creating a 1-click dataset from the composite source, go to our deepnet, and find \u201cBATCH PREDICTION\u201d in the cloud action menu:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/G4KNmYvmxBLSzwYuL8VtxZKdm2gtNoiK7r18gEWNfufOgoypjzrEsvIG1OoCPomXre5wuFR7T6-n-jYdwvIhB1jq3ESgYCAzNdEk3ipiTQk94du00hMLU_Ad53MF_SgLJitlHoxG\" alt=\"\"><\/figure>\n<p>Pick \u201cneighborhood-flowers\u201d as the dataset, then configure the batch prediction output \u2014 you only need the \u201cimage_id\u201d field, which links back to each image, along with the predicted class and its probability.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/yUZgczEdpTIeABE5AWRWDX-jIKDbSTEagbzBK3oWSS-o6jf_oZm1FjWtZQAfR5waxnhPfDMdyNJPgyCysW3z6iaRJsLy5szEtdtlA1rHW4vQ1n2QwjQK1lkgUVzRhSFOwZIHiGbz\" alt=\"\"><\/figure>\n<p>Next, click on the \u201cPredict\u201d button to create the batch prediction:<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/zi86qmuRvrNafTXR4nY4IB_4A0RYNKBC0rIvmflllQNvpfhNyV7f-tVYVwVUQ_yke4QugWkR5N1YZ_y20byrqMuuhA1YS7Mz81C5oB9fc0HhUFVHB0nmRKaUicU0juynTGdxTA2F\" alt=\"\"><\/figure>\n<p>You can download the batch prediction as a CSV file. 
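The downloaded CSV can be inspected with a few lines of code. The sketch below assumes a hypothetical export layout with image_id, label (the predicted class), and probability columns, matching the output fields selected above; adjust the column names to your actual export:

```python
import csv
import io

def summarize_batch_prediction(csv_text):
    """Tally the predicted classes in a batch prediction CSV export."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        # "label" is the assumed name of the predicted-class column.
        counts[row["label"]] = counts.get(row["label"], 0) + 1
    return counts
```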
By default, BigML also generates an output dataset containing the batch prediction results. Clicking on the \u201cOutput dataset\u201d button will take you to the dataset view. As an aside, the \u201cScatterplot\u201d view of the dataset, which displays thumbnail images alongside their predicted labels, is very good for inspecting the results.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/GahcD0g9E_x3aAxHCnabw1tMXplITx2fyO8x7gIj-cMIoOGms-AGTIA6rCpWDZrYMaPqnrxAj3EFgcBAwpruz3JEZwIJVFXnD4x-O8BKY3N01ZPdvUmSDPv42CaMyxkBinM4W6UM\" alt=\"\"><\/figure>\n<p>In the \u201cScatterplot\u201d view, each instance is plotted as a dot on the chart. The two colors represent the two classes, and the probabilities fall between 0.90 and 1.00. If you mouse over a dot, its image, its predicted label, and its probability are highlighted in the \u201cDATA INSPECTOR\u201d panel on the right. <img decoding=\"async\" loading=\"lazy\" width=\"624\" height=\"536\" src=\"https:\/\/lh6.googleusercontent.com\/IJ2hqrx9CpQwfiX77uW_9X8P3CmIQeW6VTgefnWlwsxOLNggfc4buEa6YDskW0oF6Mb_rlcGLmJIdVtopQ6onSvDAIvPW6tOCEPWJOg7CpHnhASQ52t6CWuc5PRJfzQA82PuITpr\"><\/p>\n<p>I\u2019m happy to report that after completing these steps, I checked with my neighbors who planted those flowers and they confirmed that the classification results from the 9 pictures were all correct. Mission accomplished!<\/p>\n<h2 id=\"summary\">Summary<\/h2>\n<p>We set out to build an image classifier on the BigML Dashboard to help identify two similar flower species. We downloaded the images from the Internet and used them to create two composite sources and their datasets. From one dataset, we created a deepnet, which is a Convolutional Neural Network. The other dataset has a set of pre-trained CNN image features, and we used it to create a logistic regression. Using the pictures I took around my neighborhood, both models helped me classify lily\/daylily with high accuracy. 
The whole process shows how easily, quickly, and accurately image classifiers can be built on the BigML Dashboard. This is remarkable:\u00a0knowing nothing about flowers except how to download images of them, we were able to create a computer program to classify them accurately with the Machine Learning power of the BigML platform.<\/p>\n<h2 id=\"do-you-want-to-know-more-about-image-processing\">Do you want to know more about Image Processing?<\/h2>\n<p>Be sure to visit the <strong><a rel=\"noreferrer noopener\" href=\"https:\/\/bigml.com\/releases\/image-processing\" target=\"_blank\">release page<\/a> <\/strong>of BigML Image Processing, where you can find more information and documentation. There are also links to other blog posts on related topics, such as <a href=\"https:\/\/blog.bigml.com\/2021\/11\/24\/composite-sources-in-bigml\/\"><strong>composite sources<\/strong><\/a> and <strong><a href=\"https:\/\/blog.bigml.com\/2021\/11\/29\/the-many-ways-of-labeling-images-on-the-bigml-platform\/\">image labeling<\/a><\/strong> for your convenience. Feel free to join the FREE <strong><a href=\"https:\/\/attendee.gotowebinar.com\/register\/3316692637331486991\" target=\"_blank\" rel=\"noreferrer noopener\">live webinar on Wednesday, December 15 at 8:30 AM PST \/ 10:30 AM CST \/ 5:30 PM CET<\/a><\/strong>. 
Register today, space is limited!<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blog.bigml.com\/2021\/12\/01\/building-a-simple-image-classifier-on-the-bigml-dashboard\/<\/p>\n","protected":false},"author":0,"featured_media":1298,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1297"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1297"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1297\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1298"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.c
om\/machine-learning\/wp-json\/wp\/v2\/media?parent=1297"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1297"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1297"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}