
Quality Management

We believe the quality of a machine learning model depends entirely on the quality of the labeling of its training data. We help data scientists control the quality of data labeling in the following ways:

Annotator's Performance Management

  1. Measure, record, and analyze each annotator's performance on every individual task, asset, and label.
  2. Compare the performance of multiple annotators on the same task.
  3. Distribute labeling work among multiple labelers and measure the consensus across their work.
  4. Seed annotation tasks with a golden data set, and report each annotator's performance on that golden data set.
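The consensus and golden-set checks above can be sketched in a few lines. The function names and the majority-vote agreement scheme below are illustrative assumptions, not our actual implementation:

```python
from collections import Counter

def consensus_label(labels):
    """Majority-vote label and the fraction of annotators who agree with it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

def golden_set_accuracy(annotations, golden):
    """Fraction of seeded golden tasks the annotator labeled correctly.

    `annotations` maps task id -> annotator's label;
    `golden` maps task id -> ground-truth label.
    """
    correct = sum(1 for task_id, truth in golden.items()
                  if annotations.get(task_id) == truth)
    return correct / len(golden)
```

For example, three annotators labeling the same asset as `["car", "car", "truck"]` yield a consensus label of `"car"` with two-thirds agreement; low agreement flags tasks for review.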

Pixel Accurate tools

  1. We use advanced image processing to build pixel-accurate tools such as segmentation, freehand, and growth tools.
  2. We build a high-quality user experience that allows annotators to do high-quality work at a fast pace.
  3. We enable smooth collaboration between annotators and data scientists.
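As one concrete illustration of such a tool, a growth (region-growing) selection can be sketched as a flood fill that collects neighboring pixels whose intensity stays within a tolerance of the seed pixel. This is a simplified sketch of the general technique, not our production implementation:

```python
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region of 4-connected pixels whose intensity is within
    `tolerance` of the seed pixel's intensity.

    `image` is a 2D list of intensities; `seed` is a (row, col) tuple.
    Returns the set of (row, col) coordinates in the grown region.
    """
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        # Visit the four direct neighbors of the current pixel.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tolerance):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

A single click on a seed pixel then selects a whole homogeneous area at pixel precision, which is far faster for the annotator than tracing its boundary by hand.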

State of the art User Experience (UX)

  1. The quality of work performed by annotators depends on the quality of the user experience in their annotation tools.
  2. We build an intuitive user experience for annotators.

Labeling Instruction Builder

  1. Empower data scientists to take control of the annotator user experience.
  2. Labeling classes and the ontology need to be defined in detail.
  3. Data scientists need a precise view of how the labeling interface appears to the annotator.
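A labeling ontology of this kind might be expressed as a simple configuration that the instruction builder renders into the annotator interface. The schema, class names, and tool types below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical ontology a data scientist might author; every field name
# and value here is illustrative, not a real product schema.
ontology = {
    "classes": [
        {"name": "vehicle", "tool": "bounding_box",
         "attributes": [{"name": "type",
                         "values": ["car", "truck", "bus"]}]},
        {"name": "road", "tool": "segmentation", "attributes": []},
    ]
}

def validate_ontology(ontology):
    """Basic sanity checks before the ontology reaches annotators:
    class names must be unique and every attribute must offer values."""
    names = [c["name"] for c in ontology["classes"]]
    if len(names) != len(set(names)):
        raise ValueError("duplicate class names in ontology")
    for c in ontology["classes"]:
        for attr in c.get("attributes", []):
            if not attr["values"]:
                raise ValueError(f"attribute {attr['name']!r} has no values")
    return True
```

Validating the definition up front gives the data scientist a precise, reproducible specification of what the annotator will see before any labeling work begins.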