The token (or API key) must be passed as a request header. You can find your user token on the User Account page in Label Studio. Example: <br><pre><code class="language-bash">curl https://label-studio-host/api/projects -H "Authorization: Token [your-token]"</code></pre>
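The same authenticated call can be sketched in Python using only the standard library; the host and token values below are placeholders, not real credentials:

```python
import urllib.request

LABEL_STUDIO_URL = "https://label-studio-host"  # placeholder host
API_TOKEN = "your-token"  # copy your real token from the User Account page

# Token auth: pass the API key in the Authorization header as "Token <key>"
req = urllib.request.Request(
    f"{LABEL_STUDIO_URL}/api/projects",
    headers={"Authorization": f"Token {API_TOKEN}"},
)
# Sending is skipped here because the host above is a placeholder;
# a real call would be: urllib.request.urlopen(req)
```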
Request
This endpoint expects an object.
`annotator_evaluation_enabled` (boolean, Optional)
Enable annotator evaluation for the project
`color` (string or null, Optional, <=16 characters)
`control_weights` (any or null, Optional)
Dict of weights for each control tag used in metric calculations. Each control tag (e.g. label or choice) has its own key in the control weights dict, with a weight for each label and an overall weight. For example, if a bounding box annotation with a control tag named my_bbox should be included with weight 0.33 in the agreement calculation, and the label Car should count twice as much as Airplane, specify: {'my_bbox': {'type': 'RectangleLabels', 'labels': {'Car': 1.0, 'Airplane': 0.5}, 'overall': 0.33}}
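The same structure expressed as a Python dict; the control tag name my_bbox and the labels Car and Airplane are the illustrative names from the description, not required values:

```python
# A minimal control_weights sketch matching the worked example above
control_weights = {
    "my_bbox": {
        "type": "RectangleLabels",
        "overall": 0.33,      # weight of this control tag in agreement
        "labels": {
            "Car": 1.0,       # Car counts twice as much...
            "Airplane": 0.5,  # ...as Airplane
        },
    }
}
```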
`created_by` (object, Optional)
Project owner
`description` (string or null, Optional)
Project description
`enable_empty_annotation` (boolean, Optional)
Allow annotators to submit empty annotations
`evaluate_predictions_automatically` (boolean, Optional)
Retrieve and display predictions when loading a task
`expert_instruction` (string or null, Optional)
Labeling instructions in HTML format
`is_draft` (boolean, Optional)
Whether or not the project is in the middle of being created
`is_published` (boolean, Optional)
Whether or not the project is published to annotators
`label_config` (string or null, Optional)
Label config in XML format. See more about it in the documentation
`maximum_annotations` (integer, Optional)
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (is_labeled=True)
`sampling` (enum or null, Optional)
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)
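Because the sampling values are fixed strings, a client can guard against typos before sending a request. The helper below simply restates the allowed values; the function name is illustrative:

```python
# Documented sampling modes for a project
ALLOWED_SAMPLING = {
    "Sequential sampling",
    "Uniform sampling",
    "Uncertainty sampling",
}

def validate_sampling(value: str) -> str:
    """Raise ValueError if value is not a documented sampling mode."""
    if value not in ALLOWED_SAMPLING:
        raise ValueError(f"unsupported sampling mode: {value!r}")
    return value
```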
`show_annotation_history` (boolean, Optional)
Show annotation history to annotator
`show_collab_predictions` (boolean, Optional)
If set, the annotator can view model predictions
`show_instruction` (boolean, Optional)
Show instructions to the annotator before they start
`show_overlap_first` (boolean, Optional)
`show_skip_button` (boolean, Optional)
Show a skip button in the interface and allow annotators to skip the task
`skip_queue` (enum or null, Optional)
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped
`task_data_login` (string or null, Optional, <=256 characters)
Task data credentials: login
`task_data_password` (string or null, Optional, <=256 characters)
Task data credentials: password
`title` (string or null, Optional, 3-50 characters)
Project name. Must be between 3 and 50 characters long.
`workspace` (integer, Optional)
`show_ground_truth_first` (boolean, Optional, Deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream
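Putting a few of the request fields together, a create-project body might look like the sketch below. Every value is hypothetical; only field names documented above are used, and the body is serialized to JSON before sending:

```python
import json

# Hypothetical project settings for a create-project request
payload = {
    "title": "Street scenes",            # must be 3-50 characters
    "description": "Bounding boxes for vehicles",
    "sampling": "Uniform sampling",      # one of the allowed sampling values
    "skip_queue": "REQUEUE_FOR_OTHERS",  # one of the allowed skip_queue values
    "show_skip_button": True,
    "maximum_annotations": 2,
}

body = json.dumps(payload)  # the request body is sent as JSON
```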
Response
`config_has_control_tags` (boolean)
Flag indicating whether the project is ready for labeling
`config_suitable_for_bulk_annotation` (boolean)
Flag indicating whether the project is ready for bulk annotation
`created_at` (datetime)
`finished_task_number` (integer)
Finished tasks
`ground_truth_number` (integer)
Honeypot annotation number in project
`id` (integer)
`num_tasks_with_annotations` (integer)
Tasks with annotations count
`parsed_label_config` (any)
JSON-formatted labeling configuration
`queue_done` (integer)
`queue_total` (integer)
`skipped_annotations_number` (integer)
Number of annotations in the project that were skipped by collaborators
`start_training_on_annotation_update` (boolean)
Start model training after any annotations are submitted or updated
`state` (string)
`task_number` (integer)
Total task number in project
`total_annotations_number` (integer)
Total annotations number in project, including skipped_annotations_number and ground_truth_number.
`total_predictions_number` (integer)
Total predictions number in project, including skipped_annotations_number, ground_truth_number, and useful_annotation_number.
Useful annotation number in project not including skipped_annotations_number and ground_truth_number. Total annotations = annotation_number + skipped_annotations_number + ground_truth_number
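The relationship between these counters can be checked with simple arithmetic; the counts below are made-up values for one hypothetical project:

```python
# Hypothetical counter values from a project response
useful_annotation_number = 120
skipped_annotations_number = 10
ground_truth_number = 5

# total_annotations_number includes skipped and ground-truth annotations
total_annotations_number = (
    useful_annotation_number
    + skipped_annotations_number
    + ground_truth_number
)
```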
`annotator_evaluation_enabled` (boolean or null)
Enable annotator evaluation for the project
`color` (string or null, <=16 characters)
`control_weights` (any or null)
Dict of weights for each control tag used in metric calculations. Each control tag (e.g. label or choice) has its own key in the control weights dict, with a weight for each label and an overall weight. For example, if a bounding box annotation with a control tag named my_bbox should be included with weight 0.33 in the agreement calculation, and the label Car should count twice as much as Airplane, specify: {'my_bbox': {'type': 'RectangleLabels', 'labels': {'Car': 1.0, 'Airplane': 0.5}, 'overall': 0.33}}
`created_by` (object or null)
Project owner
`description` (string or null)
Project description
`enable_empty_annotation` (boolean or null)
Allow annotators to submit empty annotations
`evaluate_predictions_automatically` (boolean or null)
Retrieve and display predictions when loading a task
`expert_instruction` (string or null)
Labeling instructions in HTML format
`is_draft` (boolean or null)
Whether or not the project is in the middle of being created
`is_published` (boolean or null)
Whether or not the project is published to annotators
`label_config` (string or null)
Label config in XML format. See more about it in the documentation
`maximum_annotations` (integer or null, -2147483648 to 2147483647)
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (is_labeled=True)
`min_annotations_to_start_training` (integer or null, -2147483648 to 2147483647)
Minimum number of completed tasks after which model training is started
`model_version` (string or null)
Machine learning model version
`organization` (integer or null)
`overlap_cohort_percentage` (integer or null, -2147483648 to 2147483647)
`pinned_at` (datetime or null)
Pinned date and time
`reveal_preannotations_interactively` (boolean or null)
Reveal pre-annotations interactively
`sampling` (enum or null)
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)
`show_annotation_history` (boolean or null)
Show annotation history to annotator
`show_collab_predictions` (boolean or null)
If set, the annotator can view model predictions
`show_instruction` (boolean or null)
Show instructions to the annotator before they start
`show_overlap_first` (boolean or null)
`show_skip_button` (boolean or null)
Show a skip button in the interface and allow annotators to skip the task
`skip_queue` (enum or null)
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped
`task_data_login` (string or null, <=256 characters)
Task data credentials: login
`task_data_password` (string or null, <=256 characters)
Task data credentials: password
`title` (string or null, 3-50 characters)
Project name. Must be between 3 and 50 characters long.
`workspace` (integer or null)
`show_ground_truth_first` (boolean or null, Deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream