{"id":1275,"date":"2021-11-30T08:28:34","date_gmt":"2021-11-30T08:28:34","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2021\/11\/30\/the-many-ways-of-labeling-images-on-the-bigml-platform\/"},"modified":"2021-11-30T08:28:34","modified_gmt":"2021-11-30T08:28:34","slug":"the-many-ways-of-labeling-images-on-the-bigml-platform","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2021\/11\/30\/the-many-ways-of-labeling-images-on-the-bigml-platform\/","title":{"rendered":"The Many Ways of Labeling Images on the BigML Platform"},"content":{"rendered":"<div>\n<p>BigML\u2019s <a href=\"https:\/\/blog.bigml.com\/2021\/11\/18\/image-processing-easily-solving-image-data-driven-business-problems\/\"><strong>upcoming release on Wednesday, December 15, 2021<\/strong><\/a>, will bring a new set of Image Processing resources to the BigML platform! To warm up for the release, we already saw an <strong><a href=\"https:\/\/blog.bigml.com\/2021\/11\/22\/introduction-to-image-processing\/\">introduction to the basic concepts<\/a><\/strong> of Image Processing as well as how BigML, with <strong><a href=\"https:\/\/blog.bigml.com\/2021\/11\/24\/composite-sources-in-bigml\/\">composite sources<\/a><\/strong>, lets you use your image data to build any Machine Learning model. In this post, we show you four different ways to label images on the BigML platform.<\/p>\n<p>Image labels are important in Machine Learning. In particular, they are indispensable when solving image classification problems. 
As such, BigML provides flexible ways to get your images labeled.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" data-attachment-id=\"30112\" data-permalink=\"https:\/\/blog.bigml.com\/labeling_images_bigml\/\" data-orig-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/labeling_images_bigml.jpg\" data-orig-size=\"1200,630\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"labeling_images_bigml\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/labeling_images_bigml.jpg?w=300\" data-large-file=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/labeling_images_bigml.jpg?w=810\" src=\"https:\/\/littleml.files.wordpress.com\/2021\/11\/labeling_images_bigml.jpg?w=1024\" alt=\"\" class=\"wp-image-30112\"><\/figure>\n<h2 id=\"1-labeling-images-by-folders\">1. Labeling Images by Folders<\/h2>\n<p>A common practice in the industry is to group images by folders, with the folder names being their labels. 
This is indeed the most straightforward way.\u00a0<\/p>\n<p>For instance, you may organize your training data by putting the image files into subfolders like this:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/6f2yTtmvNO5NfNUyYBJy6Y2lUQjAspIojfMToVdnC59mGdjaXFuFbz-33Vea_uvlO1F3HVdFfoRrvJPl0Er3m-YiaBdnrxAYXyqaQAoGWRUb38KA86KfWR-q7daLxc2GjcpyF8vx\" alt=\"\" width=\"369\" height=\"505\"><\/figure>\n<p>So all your grape images, that is, the images you want to label as \u201cgrape\u201d, are in the \u201cgrape\u201d folder, and all your strawberry images are in the \u201cstrawberry\u201d folder.<\/p>\n<p>Now, you can create a zip file by compressing the two folders together:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/ypCXv45bdZt286yrMomG8XlJz89OyRDUICEw1ttu-3drt3BT3KSqi2OTUp-Bf0jh5B3utGkSrLQ0pVEAv_jCV-Y0T0-TIeSQgI9I_xgxzYrRS5hZpjmPyZx6Bd0kVNISA8A-mXUr\" alt=\"\" width=\"648\" height=\"573\"><\/figure>\n<p>Or on the command line, assuming you are in a directory whose immediate subdirectories are <em>grape<\/em> and <em>strawberry<\/em>, you can use a command like:<\/p>\n<pre class=\"wp-block-preformatted\">zip -r grape-strawberry.zip grape strawberry<\/pre>\n<p>to create the zip file.<\/p>\n<p>After uploading the zip file to the BigML platform, <a rel=\"noreferrer noopener\" href=\"https:\/\/blog.bigml.com\/2021\/11\/24\/composite-sources-in-bigml\/\" target=\"_blank\"><strong>an image composite source<\/strong><\/a> will be created. Go to the \u201cFields\u201d view tab and you will see the image label as one of the fields in the composite source. 
The label values match the folder names.<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh6.googleusercontent.com\/RQ7iktmIJNSOj_duBDF_QtKoAwoOM8fIqZ21Zp-PIlJJzOKkBTKV0BZvnrd2LP7hJvGelXALQCnPNLD2kjxmechfQrVHyaKaRtTfc9n6A8fakS7__OOp1upBn-1qv85hSbPuUV-T\" alt=\"\" width=\"732\" height=\"274\"><\/figure>\n<p>Go to the \u201cImages\u201d view tab and you will see that all images have been properly labeled:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/zwl70O29msu0FFq483bEpuBboqKFq7thjgwT_a7YEKdswOLikxG6CzfuAimMEONAcEiQmVHyB0opIIY4u1dSjrMYdThfx5PbCPhPqMcirrpUyMhKQZ8LByEH9ZelNtXIXxIyNqgz\" alt=\"\" width=\"674\" height=\"644\"><\/figure>\n<h2 id=\"2-labeling-images-on-the-dashboard\">2. Labeling Images on the Dashboard<\/h2>\n<p>If you didn\u2019t organize your image files in folder structures and have already uploaded them, don\u2019t worry. The BigML Dashboard provides an interactive way to label your images. You can create an image composite source by uploading an archive file (a tar or zip file). 
Because the files are not in a folder structure, the composite source doesn\u2019t have a label field:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/jxHb1rk1b9caFPhRvl0EpXnH_VrhZSRhnJNs6lb2M5K4dWMLWkGtfd7SL9gdxGhI4Vs-f9YUTQez-x7tte4Tnh9LYXMNEUgLFv4NPXeYei4Y4idADkU8IKejF7nKxQTZgWT46yAN\" alt=\"\" width=\"706\" height=\"289\"><\/figure>\n<p>If you go to the \u201cImages\u201d view tab, you will see all the images in the composite source.<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/D-aaop-Nij8nac26U9D43UuaX4t9CAbr3TlvX1tCsOz-_wKIDg4CwOSfQvdscP571JVfgjw8UxJhqIbzOzB-bv_1Uehn-e_FuEL8kt3Z7ZWTUGadIwX4A5SUejVAjzNCsK1qdwIz\" alt=\"\" width=\"706\" height=\"688\"><\/figure>\n<p>Besides viewing all the images, you can select them to perform certain operations. One of the operations is to \u201cLabel images\u201d.\u00a0<\/p>\n<p>Before images can be labeled, you need to create a label field; add it before selecting the images to label. On top of the images, on the left, is a \u201cLabel field\u201d textbox. To add a label field, click on the \u201c+\u201d next to the textbox and you will be prompted with a dialog box asking for the field name and the field type.<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/aFuEaW3-FJiBI0jCHG-TuSHZRZ4lDXfSVOELyZRiM7SNt6YToj5k90j6Okyoe60mCvFdDIagGH52adm7ON1A9rPh-s3mJgm9FC3ux-6_sw4XZ93shT5KFc0OTiPCxu3Oh8cr-_8s\" alt=\"\" width=\"722\" height=\"485\"><\/figure>\n<p>After entering the field name and selecting its type from the dropdown, click the \u201cAdd\u201d button to create the label field.<\/p>\n<p>When making image selections, you can use the \u201cSelect all images\u201d checkbox on the top right to select all images. 
You can also use the \u201cSearch by name\u201d box, which acts as a name filter. That is, when a text string is typed into the box, all images whose names contain the string are shown and can be selected. For example, I typed \u201cimage_\u201d in the textbox, and all images whose names contain \u201cimage_\u201d showed up in the view. In fact, all of them were images of strawberries:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/FKz1ltyoNM9tBR0q2bRfIegSfhOHmm-n69U0AuycYG3o_jQIqyScnujf6hlAbesS840oKLCJdmq00Meq4gETLl6SSBXmVNEXy67jIhQmlOtnPXHQpqGeql6IbMa6SQOQHg9_jZ-B\" alt=\"\" width=\"728\" height=\"846\"><\/figure>\n<p>I selected all of them and clicked on the \u201cLabel images\u201d button at the bottom right. This labeled all selected images by giving the label field a value, such as \u201cstrawberry\u201d as shown below:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh6.googleusercontent.com\/qcTfblySZdiHcJQGI2gdfunPF4CtC9MrwOwSzwL8ipr4ZUULMGHwNdwmoBGV2oikmCFy4vnXH5fa74lzcYZy1vt4kKi5ij9wuROZtF0Oybqtf3tsT-2a2jedCgQu8-eTQvtomu7u\" alt=\"\" width=\"650\" height=\"516\"><\/figure>\n<p>I can do the same for the grape images by selecting all of them and then adding the label value. After this, all images will be labeled.<\/p>\n<h2 id=\"3-providing-labels-by-using-table-image-composite-sources\">3. Providing Labels by Using \u201cTable+Image\u201d Composite Sources<\/h2>\n<p>In some real-world scenarios, images and their labels are prepared separately, in separate files. 
For instance, there could be a collection of images with their labels stored in a CSV or JSON file.<\/p>\n<p>In addition to labels, CSV or JSON files can provide other information about the images, such as captions, comments, geo-coordinates, etc.\u00a0<\/p>\n<p>To accommodate this scenario of keeping image labels and other information in a separate file, BigML provides composite sources of the \u201cTable+Image\u201d format. Here, \u201ctable\u201d refers to a CSV or JSON file because it provides the extra information in tabular form.<\/p>\n<p>As the name of the format suggests, your data has two parts. One is a collection of images; the other is a table file, that is, a CSV or JSON file. In the case of CSV, one or more of its columns refer to the images. These columns become fields of the \u201cpath\u201d optype in the created composite source, containing the file names of the corresponding images. The other columns contain information about the images, such as labels.\u00a0<\/p>\n<p>Here is an example of a zip file containing 66 files. 
Using the command<\/p>\n<pre class=\"wp-block-preformatted\">unzip -l grape-strawberry+table.zip<\/pre>\n<p>we can see its file list (each ellipsis \u2026 represents the files omitted for brevity):<\/p>\n<pre class=\"wp-block-preformatted\">Archive:  grape-strawberry+table.zip\n  Length      Date    Time    Name\n---------  ---------- -----   ----\n    37567  11-25-2020 21:56   092_0001.jpg\n    20750  11-25-2020 21:56   092_0002.jpg\n    19104  11-25-2020 21:56   092_0003.jpg\n                              ...\n    85537  11-25-2020 21:56   092_0030.jpg\n     1822  12-06-2020 22:26   grape-strawberry.csv\n    20354  05-17-2020 20:57   image_0001.jpg\n    10074  05-17-2020 20:57   image_0002.jpg\n    11772  05-17-2020 20:57   image_0003.jpg\n                              ...\n    21629  05-17-2020 20:57   image_0035.jpg\n---------                     -------\n  1281536                     66 files\n\n<\/pre>\n<p>Among the 66 files, 65 are jpg image files and one is the CSV file <em>grape-strawberry.csv<\/em>, which contains (again, ellipsis for brevity):<\/p>\n<pre class=\"wp-block-preformatted\">\"image\", \"label\"\n\"092_0001.jpg\", \"grape\"\n\"092_0002.jpg\", \"grape\"\n\"092_0003.jpg\", \"grape\"\n...\n\"image_0033.jpg\", \"strawberry\"\n\"image_0034.jpg\", \"strawberry\"\n\"image_0035.jpg\", \"strawberry\"\n<\/pre>\n<p>There are two columns in the CSV: \u201cimage\u201d holds the references to the image files, which are their filenames, and \u201clabel\u201d holds the labels of the images.<\/p>\n<p>Uploading the zip file will create a \u201cTable+Image\u201d composite source. 
In its \u201cSources\u201d view below, you can see that the CSV becomes a component source:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh6.googleusercontent.com\/rhJm9crTZORshym8GSCTq-zKH94mhI9Tt5LmItD1wK3CId4H8P2ArbNpU5FRKvymyQrAF30m6tsi8pBsmn8NNTmOiNiA-2a7w7sZ3jX0Igah1JcTlwpxI9P4zda6yF3DTKVUq0A_\" alt=\"\" width=\"709\" height=\"485\"><\/figure>\n<p>In its \u201cFields\u201d view, you can see that the image label has been added as a field coming from the CSV:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/BLNXXO2MTgeZW0qv5D5CbX5FMGAtwzg4e3D_J2lKXsm0Zqtzykdj6DTqFqv9uJJZcZHEFDhUyz2O2S5SzFfd9uUBSHNq_JwEIS5Rh_QN6a-PlrVcLNTHU9IqUFQaMOK-dzIVSDqk\" alt=\"\" width=\"1039\" height=\"375\"><\/figure>\n<p>\u00a0<\/p>\n<h2 id=\"merging-separately-labeled-images\">4. Merging Separately Labeled Images<\/h2>\n<p>Sometimes images are labeled separately. In certain applications, one class of images is extracted from a set of data and uploaded to the BigML platform. Then, another class of images is uploaded. There can be many classes of images that are prepared and uploaded at different times, and they should be merged together to create datasets for Machine Learning. <strong><a href=\"https:\/\/blog.bigml.com\/2021\/11\/24\/composite-sources-in-bigml\/\" target=\"_blank\" rel=\"noopener\">BigML composite sources<\/a><\/strong> are great for merging images.<\/p>\n<p>As a refresher, a composite source is a collection of other sources, which are called component sources. The power and flexibility of a composite source lie in its ability to hold many types of component sources. When all component sources are images, such a composite source is called an image composite source. 
Component sources can themselves be composite sources, which is what makes it possible to use a composite source to merge different sets of images.<\/p>\n<p>To illustrate how it is done, we upload two zip files, each containing one class of images. In the first zip file, all images are inside a folder <em>grape<\/em>. In the second zip, all images are inside a folder <em>strawberry<\/em>. As seen in the previous sections, when a zip file of images is uploaded to the BigML platform, an image composite source is created.\u00a0<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/IW9azKwkwjRqWeZanGQCF4jdHazcpMfQ-P1x83CFW6pu9VubKeQ61BHQM8gP_0eF6NLOTTT14qU8pLWHPdh84eAMwOcbRMzelD1pBJSKSW1Wd7olKAFEtu9gxx6lKuNKkqlmpHo6\" alt=\"\" width=\"721\" height=\"203\"><\/figure>\n<p>If we click on one of the composite sources, we will see its \u201cFields\u201d view as below. Here we can see all its fields, including the label field.<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/tCDL-xlYwy2pOasFE6kqb6C_U_oV8qyA1xSIi7437p6JSHRZrCZAowWv_aISbusItStt3o_f_3Nv5mCIyEHnUHFbLPtg7VpoBv5koNTw4wax6CSzMvXFiwZ0EcLCjQU6J-Cj1ELl\" alt=\"\" width=\"714\" height=\"300\"><\/figure>\n<p>Below is the \u201cSources\u201d view of the composite source, where we see its component sources: all the images.<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/FXDT-xxl4TcOOS2sTIPSCHhWWZO95foG1bp3iB1ccnhvX_o9d17aC84jsd86wWKdN2QIQBiecGhVeHwU4hBytr4h3WZKugy3D7g_XqmK-OU30uERoZY0odhfPzLdSLIL_UWn2iTH\" alt=\"\" width=\"709\" height=\"469\"><\/figure>\n<p>Now we want to merge those two classes of images by creating a new composite source from the two just uploaded. We first make sure both have the same fields. 
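This same-fields requirement can be stated as a tiny, order-sensitive check. The sketch below is illustrative only; real BigML field descriptors carry more metadata than these hypothetical (name, optype) pairs:

```python
def fields_compatible(fields_a, fields_b):
    """Order-sensitive comparison: merging two sources into one
    composite requires identical fields, with identical optypes,
    in identical order."""
    return list(fields_a) == list(fields_b)

# Hypothetical field layouts for the two composite sources:
grape_fields = [("image_id", "image"), ("filename", "path"), ("label", "categorical")]
strawberry_fields = [("image_id", "image"), ("filename", "path"), ("label", "categorical")]
shuffled_fields = [("filename", "path"), ("image_id", "image"), ("label", "categorical")]
```

Here fields_compatible(grape_fields, strawberry_fields) holds, while the shuffled layout fails the check, just as composite sources with reordered fields could not be merged as desired.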
Comparing the \u201cFields\u201d view of the strawberry image composite below with the grape one shown above, we see that they have the same fields: \u201cimage_id\u201d, \u201cfilename\u201d, and \u201clabel\u201d. They also have the same image features: Histogram of gradients, which has 234 fields.\u00a0<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh4.googleusercontent.com\/WyZKTl7nfX0aM4shBdnC37CkwluUz4gUip9PaVlkoInMBKcB1FWAGuVJkt7JJoxuGz2_Vdhq3fkbBMOJqsmKhuuOO-_ydlGW9bwFbED1qbHufgbtL8G6j9QPaXNZOXbtwu6YV5YT\" alt=\"\" width=\"717\" height=\"308\"><\/figure>\n<p>If any of the fields were different, or the fields were in a different order, we would not be able to create the new composite source as desired.<\/p>\n<p>After confirming that the two composite sources have the same fields, we can close them by going to the cloud action icon in the title bar and clicking on \u201cClose This Composite Source\u201d:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/hkGlN4BwvCvp_b2fYuhezngna7yaJ0QmVHkb3UYPwa3tSsO4KIjSxDg8p9dnsgZH9IglxMXYoOLrrYpJQW1HiuzwssmLLiUR7LBkouJOCEcMn6JvvkXVAmNlC8c3QfKo9sO-gj9v\" alt=\"\" width=\"709\" height=\"324\"><\/figure>\n<p>All composite sources are created as open, which means they are modifiable. Closing a source makes it no longer modifiable. 
Only closed composite sources can be component sources of another composite.<\/p>\n<p>Now that the two composite sources are ready, we click on the \u201cCreate Composite Source\u201d icon on the title bar of the source list view:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh5.googleusercontent.com\/10I3pwXHq2t-mMqS_m4cDHI-rK59Osub7u9l2xlz6OUeuXLI7XWKv7xjZxNZm7qY6TqEFDlleOd-crVFIVlbcWxnf65BmzU-jriKm85EIBH-z0owBBgMhSqYFac_TWYrpjRZ04Nd\" alt=\"\" width=\"701\" height=\"168\"><\/figure>\n<p>Then we select the two composite sources we want to merge and click on the \u201cCreate composite source\u201d button on top of the list:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/SITOBFXxst1aEEibWExmwBlvh5sDaE_oH8Ecz-w83yGg6YMSb_rcz-qSuMX_he3Yio6tzFDYmI9M8piYnk1-DYGl_AN1RWaiNta0QWEi5_LGGC6eUfq6KXlVd7F_C_dxDw1qlXv_\" alt=\"\" width=\"704\" height=\"198\"><\/figure>\n<p>After we give it a name, the new composite source is created:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/HEBRkRKrwI8mpMwb3qbvCaShbQlTVVptacGAt38oUZ73Ufkjmgj20-XRTjrpxFMBmmqucw4JLJYJqVN-EcTbmSRqhXSamOhxWvraHRbTXmy92fu60YOnc-k-kgPQYHjVBVRoO3tU\" alt=\"\" width=\"704\" height=\"320\"><\/figure>\n<p>We can see the new source is still an image composite source, and it inherits all the fields, including the image features, from those two composite sources, which become component sources as shown below in its \u201cSources\u201d view:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/TbdigxksptUI7eHk78LiSa8c8iBl4BZGkbYFNCa96tF6KQrzhW8FDfIJQCUeFUUSiX8Ucn10S-A82ftzBVmBHaj0n1s4jpZpZi5Mumamo8UmCzN8YHhcD7FJLJPg9OdD114u9pVy\" alt=\"\" width=\"707\" height=\"272\"><\/figure>\n<p>Now we create a 1-click dataset 
from the new composite source, and we see the two classes of images reflected in the label field\u2019s histogram:<\/p>\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/lh3.googleusercontent.com\/W1pqAimKGFgTI9GiIIdsv0UOzFGcHQlPVtQd1xtDrBTNZRYgJE1-gdbat1jKKkVDOChy5n2Bl5S3FnJP5U9o51mlMfzba_r-D7hW82sTbAFUgWJHfb8pO27U7VvwHTWNdO0P860-\" alt=\"\" width=\"705\" height=\"296\"><\/figure>\n<p>As you can see, BigML provides many flexible ways to label images in order to accommodate different business needs and use cases. In fact, there are even more advanced ways to label images on the BigML platform. For instance, you can label images programmatically via the <strong><a href=\"https:\/\/bigml.com\/api\" target=\"_blank\" rel=\"noopener\">BigML API<\/a><\/strong>. You can also use BigML\u2019s powerful <strong><a href=\"https:\/\/github.com\/bigmlcom\/flatline\">Flatline<\/a><\/strong> tool to extract certain properties from different fields of a dataset and add them to label fields.\u00a0<\/p>\n<p>\u00a0<\/p>\n<h2 id=\"do-you-want-to-know-more-about-image-processing\">Do you want to know more about Image Processing?<\/h2>\n<p>Please\u00a0<a href=\"https:\/\/bigml.com\/releases\/image-processing\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>visit the dedicated release page<\/strong><\/a> for more information and documentation, and<strong>\u00a0join the FREE live webinar on Wednesday, December 15 at 8:30 AM PST \/ 10:30 AM CST \/ 5:30 PM CET. 
<a href=\"https:\/\/attendee.gotowebinar.com\/register\/3316692637331486991\" target=\"_blank\" rel=\"noreferrer noopener\">Register today<\/a>, space is limited!<\/strong> Stay tuned for the next blog post of our series that will be about how to build a simple image classifier on the BigML Dashboard!<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blog.bigml.com\/2021\/11\/29\/the-many-ways-of-labeling-images-on-the-bigml-platform\/<\/p>\n","protected":false},"author":0,"featured_media":1276,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1275"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=1275"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/1275\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/1276"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=1275"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=1275"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=1275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}