9 Ways Labeling Companies Are Wasting Your Money


As a seasoned provider of labeling services ourselves, we found the price tag exorbitantly high. There is a difference between making a healthy margin and taking clients for a ride. This got me thinking about the many different ways labeling providers bill clients more than they need to. Here are 9 ways clients end up paying more for labeling services than they should:

1. Large MOQ: 

A lot of 'leading' players in the industry are so profit-hungry that they put a floor on order value: you would be told that your annotation project must meet a minimum of X thousand USD (often 10 < X < 70). These profiteering ventures forget the playful, exploratory nature of machine learning practice, focusing on margins rather than on generating insights!

It should be noted that labeling is a service, so the gross margin will not be as high as for a typical SaaS business: closer to 30-50% than the 60-80% common in software as a service. Also, many aspects of labeling individually cost little, but summed over a large dataset they result in a hefty bill. Putting a high floor price in front of a new AI venture with a small budget is misdirected, because not every organization has a large unlabeled dataset specific to its endeavor. It also presents every onlooker with the question: "Is it absolutely necessary to have larger labeled datasets to train a machine learning model?" So how do you explain a bill in the hundreds of thousands of dollars? Not to mention that labeling service providers get to learn from ongoing projects, which prepares them to handle a wide variety of data types in the future. So even though more reputed organizations can demand large MOQs and price accordingly, they may not be the right fit for you.

 

2. High management costs: 

A lot of vendors will charge you high 'project management' costs. But repetitive management costs for the same kind of project should not exist: once a team has processed a unique project, the management workflow becomes a reusable product, so those costs do not actually recur. On the flip side, lean companies (shameless plug: like Abelling!) have next to no overhead, ensuring you don't end up paying for things unnecessarily.

In an article published by The New York Times back in 2019, labelers at iMerit, a leading company that provides data labeling services to big names in the tech industry worldwide, were paid between 150 and 200 dollars a month, compared to the roughly 800 to 1,000 dollars that iMerit racks up in revenue. Although it is unclear how the salary scale was structured and how the rest of the revenue was distributed, there was a significant gap between the salaries paid and the revenue generated. Of course, each labeling/annotation job needs to be explained to the labelers in detail, and there are overhead costs over varying time periods; nonetheless, management costs do seem a bit high, don't they? At Abelling, we try to address this issue and reduce the financial burden on the client wherever possible.

 

3. Number of operations: 

This is an interesting issue because, at a high level, we talk about classification, bounding boxes, segmentation, etc., but there are underlying discrepancies that may confound the customer when subscribing to a labeling service. For example, a vehicle bounding box operation includes drawing a bounding box around the car, labeling it, and assigning related attributes, with the condition that if the object's area is less than a certain number of pixels, it will not be labeled. Therefore, depending on the annotator's expertise and attention, a drawn bounding box may count as a completed operation or be discarded. It is probable that, under pressure, an annotator deems most of the images not label-worthy, yet the customer would still have to pay in full for a customized annotation service because the bounding boxes were drawn.
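To make this concrete, here is a minimal sketch of the kind of minimum-area rule that decides whether a drawn box counts as a billable operation or gets discarded. The (x_min, y_min, x_max, y_max) box format and the 32x32-pixel cutoff are illustrative assumptions, not any vendor's actual specification:

```python
# Minimal sketch of a minimum-area rule for bounding boxes.
# Box format and the 32x32-pixel threshold are hypothetical.

MIN_AREA_PX = 32 * 32  # assumed cutoff below which a box is discarded

def box_area(box):
    """Area in pixels of a box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return max(0, x_max - x_min) * max(0, y_max - y_min)

def keep_box(box):
    """A box below the area threshold is discarded, even though the
    drawing effort may still be billed to the customer."""
    return box_area(box) >= MIN_AREA_PX

boxes = [(10, 10, 200, 150), (5, 5, 20, 18)]
kept = [b for b in boxes if keep_box(b)]
print(f"{len(kept)} of {len(boxes)} boxes meet the size rule")
```

Asking for this rule up front (and for how discarded boxes are billed) is a simple way to avoid paying for operations that never make it into your dataset.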

 

4. Poor QA: 

Every image/label needs to be reviewed by independent, experienced reviewers, sometimes in multiple rounds, to meet higher accuracy requirements. Some parties tend to circumvent the QA process to save costs, checking only a subset of the data or skipping rigorous review altogether.

It has been observed that some classes in a project's labeling ontology are more sensitive to mislabeling than others, and that sensitivity directly impacts model accuracy, which is a function of the correctness of the labeling process. Simply put, some classes require more attention during labeling than their counterparts. Labeling service providers may skip this extra effort, and if they do provide it, it often shows up as an additional charge on an ever-growing billing statement.
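One way to spot those sensitive classes is to measure per-class disagreement between two independent review passes and send the noisiest classes through extra QA rounds. A hedged sketch, with class names and the two-pass setup purely illustrative:

```python
from collections import Counter

# Sketch: estimate per-class mislabeling sensitivity from two
# independent review passes. Labels and data are illustrative.
first_pass  = ["car", "truck", "van",   "car", "van", "truck"]
second_pass = ["car", "truck", "truck", "car", "car", "truck"]

totals = Counter(first_pass)
disagreements = Counter(
    a for a, b in zip(first_pass, second_pass) if a != b
)

# Classes with high disagreement rates deserve extra review rounds.
for cls in totals:
    rate = disagreements[cls] / totals[cls]
    print(f"{cls}: {rate:.0%} disagreement")
```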

 

5. Billing method: 

Datasets often contain large quantities of repetitive information, or data that is simply not relevant to the ML project. In such cases, it is important to identify the data that is most meaningful for training a model. It has been established that the effect of mislabeled data is almost independent of any particular model, which opens up the opportunity to train a general model on the given data to identify the groups that will have the highest impact during the labeling phase. Segmenting the dataset beforehand can thus be a standard operation for every labeling project rather than something billed separately each time. Labeling service providers should focus on such standardization instead of using these operations to charge extra.
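The simplest version of such a standard pre-labeling pass is stripping exact duplicates so the same record is never billed twice. A minimal sketch; a real pipeline would go further with near-duplicate detection (perceptual hashes or embeddings), and the directory layout in the usage note is hypothetical:

```python
import hashlib
from pathlib import Path

# Sketch: drop exact-duplicate files before labeling so repeated
# data is not billed twice. File-hash matching is only the simplest
# illustration of dataset segmentation.

def dedupe(paths):
    seen, unique = set(), []
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique

# Usage (hypothetical directory layout):
# images = sorted(Path("dataset/").glob("*.jpg"))
# print(f"{len(dedupe(images))} unique of {len(images)} total")
```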

 

6. Revisions: 

How many times do you go over the same data? It is a question that has had ML experts thinking for quite some time. Just having a large amount of data does not result in a better-performing model.

Fig. 1: In the chart above, each dot per line represents an additional label.

 

Source: https://towardsdatascience.com/are-you-spending-too-much-money-labeling-data-70a712123df1

 

Labeling cost is directly tied to the number of revisions per record. Even though we want to stay on the cheaper end of the cost spectrum, it is risky to depend on only one annotator. The graph above makes it clear that even one revision (two different annotators) outperforms a model trained on data labeled only once. It is therefore hard to justify a service that has a single annotator doing all the labeling yet asks for a high starting price. This does not mean the charge should simply double for assigning two annotators to the job, as the graph might suggest; it is about balancing cost against delivering a correctly labeled dataset that produces a meaningful outcome. Although striking this balance is not yet a widespread practice, Abelling strives to change that.
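When multiple annotators do label the same record, the usual way to reconcile them is a majority vote, with low agreement flagging records worth another revision. A minimal sketch, with annotator counts and labels purely illustrative:

```python
from collections import Counter

# Sketch: aggregate labels from multiple annotators by majority vote.
# Record IDs and labels are illustrative.
annotations = {
    "img_001": ["car", "car", "truck"],
    "img_002": ["van", "van", "van"],
}

def majority_label(labels):
    """Return the most common label and its agreement ratio; a low
    ratio flags records that deserve another revision."""
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

for record, labels in annotations.items():
    label, agreement = majority_label(labels)
    print(f"{record}: {label} (agreement {agreement:.0%})")
```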

 

7. Circumventing special requirements: 

In the previous section, we discussed how more than one annotation per record increases the accuracy of an ML model. On that note, we found that labeling service providers often do not specify whether there will be an expert among the annotators. For example, for a medical image classification task, one would expect at least one annotator to have a medical background and relevant knowledge of human anatomy. Startups like MD.ai hire radiologists and radiology technologists to create annotated datasets for ML models. The same issue arises in other domains, such as legal document annotation and speech annotation, where you have to deal with the innate subjectivity of the data. Unlike simple binary classifications, these are special cases where expert supervision becomes essential to produce a functioning model.

Another key issue with these data types is compliance. Both the outsourcer and the labeling service provider have to demonstrate their capacity for safe data handling; one such requirement is HIPAA compliance for personally identifiable medical data. These requirements exist to ensure data security, yet data labeling service providers are not always clear about how the data will be prevented from being copied, transferred, or, worst of all, sold elsewhere. Traditional NDAs do not cover these matters down to the last detail, and when you as a customer are paying a hefty bill, you would expect the service provider to have these questions answered before you even ask.

 

8. Not religiously doing pilots: 

Being handed the data does not mean the labeling company should go silent until the end of the job and then present a folder full of labeled data, no questions asked. The service provider and customer must communicate and select a portion of the dataset to be labeled as a pilot before signing a contract. Better still if the pilot doubles as a standard operation for identifying the data clusters that will have the most impact on training the ML model, as sketched below. In the current market, however, few providers take the pilot seriously, which spells disaster for low-budget customers who could barely manage to outsource the project in the first place.
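A reasonable pilot draws a small stratified sample so that every expected class is represented before the full contract is signed. A sketch under stated assumptions: the 5% rate, the (record_id, expected_class) input format, and the fixed seed are all choices the customer and provider would agree on, not a standard:

```python
import random
from collections import defaultdict

# Sketch: draw a stratified pilot sample so every expected class
# appears in the pilot. The 5% fraction is an assumed starting point.
PILOT_FRACTION = 0.05

def pilot_sample(records, seed=42):
    """records: list of (record_id, expected_class) pairs."""
    by_class = defaultdict(list)
    for rec_id, cls in records:
        by_class[cls].append(rec_id)
    rng = random.Random(seed)  # fixed seed keeps the pilot reproducible
    sample = []
    for cls, ids in by_class.items():
        k = max(1, int(len(ids) * PILOT_FRACTION))
        sample.extend(rng.sample(ids, k))
    return sample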

 

9. Not providing cost breakdowns: 

Up to now, every point has focused on how the cost of labeled data becomes exorbitant. You would expect the costs to be laid out for everyone's view, but they rarely are. Below is a breakdown of labeling costs for Google's data labeling service. It is not overly detailed, yet most other well-known labeling service providers do not put even this much on their websites. There is a lack of standardization in determining the unit cost of each operation. Whether the customer is an institution or a Ph.D. student, the pricing scheme needs to be transparent; otherwise there is no way to compare offers and understand whether you are paying an excessive amount for the labeling service.

Fig. 2: Google AI Platform Data Labeling Service pricing

Source: https://cloud.google.com/ai-platform/data-labeling/pricing
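With a transparent per-unit price list like the one above, estimating a bill becomes a one-liner, which is exactly the kind of sanity check opaque pricing prevents. A minimal sketch in the spirit of Google's per-1,000-unit tiers; the rates below are placeholders, not Google's actual numbers:

```python
# Sketch: estimate a labeling bill from published per-unit prices.
# Rates are hypothetical placeholders, quoted in USD per 1,000 units.
TIER_RATES = {
    "bounding_box": 63.0,
    "classification": 35.0,
}

def estimate_cost(operation, units):
    """Units are individual annotation operations (e.g. one box)."""
    return TIER_RATES[operation] * units / 1000

print(f"50k boxes: ${estimate_cost('bounding_box', 50_000):,.2f}")
```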

As everyone tries to incorporate AI into their organization's workflow, many will be tempted to rush their data out to any of the numerous data labeling shops opening in developing countries. If the points discussed in this article remain unresolved and high costs keep small and mid-sized ventures away from quality service, the industry will not only lose customers but also set off a race to the bottom from which even established service providers will not be exempt.
Want to stay updated with the latest AI developments and blog posts about the machine learning world?

Sign up for our monthly newsletter!