SYNDCAR: Synthetic Vehicle Damage Detection and Parts Segmentation Dataset

Published: 31 October 2025 | Version 1 | DOI: 10.17632/hzpj48krdt.1
Contributors:
Carlos Ferreira
Description

SYNDCAR (SYNthetic Damage CAR dataset) is a synthetic dataset developed to support research on automated vehicle inspection using computer vision. It integrates two complementary tasks: vehicle damage detection and vehicle parts segmentation.

(i) Damage classes (4): broken glass, broken lights, cracks, and scratches.
(ii) Vehicle parts (28): left side window, right side window, left door, right door, right rear quarter panel, left rear quarter panel, right front quarter panel, left rear side window, right rear side window, left front side window, right front side window, roof, left rear wheel, right rear wheel, left front wheel, right front wheel, front windshield, rear windshield, left mirror, right mirror, license plate, rear lights, front lights, front panel, rear panel, left front quarter panel, left rocker panel, and right rocker panel.

In total, SYNDCAR includes 245 high-resolution images, manually annotated and provided in YOLO detection and segmentation formats. Vehicle damage is annotated with bounding boxes, while vehicle parts are annotated with polygons. All images were captured using five different imaging devices, each with a distinct sensor and resolution. The filename of every image begins with an identifier (ID) corresponding to the device used for acquisition, ensuring full traceability across the dataset.

The dataset is distributed as a ZIP archive organized as follows:
/images/ - all original images
/labels_damage/ - YOLO-format annotations for vehicle damage
/labels_parts/ - YOLO-format annotations for vehicle parts
/data_damage.yaml - configuration file for YOLO detection, including class names
/data_parts.yaml - configuration file for YOLO segmentation, including class names
/prepare_dataset.py - Python script to prepare the dataset for detection and segmentation tasks in YOLO or COCO format

This dataset was produced by researchers from the Digital Transformation CoLAB (DTx CoLAB) and the University of Minho, Guimarães, Portugal.
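To illustrate how the annotations can be consumed, the following is a minimal sketch (not part of the dataset) that walks an extracted copy of the archive and parses the YOLO label files. The folder layout mirrors the listing above; the root path "SYNDCAR" and the assumption that the device ID is the token before the first underscore in each filename are illustrative and should be adapted to the actual files.

```python
from pathlib import Path

# Assumed paths after unzipping the archive; adjust to your local layout.
DATASET_ROOT = Path("SYNDCAR")
IMAGES_DIR = DATASET_ROOT / "images"
DAMAGE_LABELS_DIR = DATASET_ROOT / "labels_damage"
PARTS_LABELS_DIR = DATASET_ROOT / "labels_parts"


def device_id(image_path: Path) -> str:
    """Return the acquisition-device identifier at the start of the filename.

    The dataset states every filename begins with a device ID; here we assume
    it is the token before the first underscore (illustrative assumption).
    """
    return image_path.stem.split("_")[0]


def read_damage_boxes(label_path: Path):
    """Parse a YOLO detection label file: one 'class x_center y_center width height'
    line per damage instance, with values normalised to [0, 1]."""
    boxes = []
    for line in label_path.read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes


def read_part_polygons(label_path: Path):
    """Parse a YOLO segmentation label file: one 'class x1 y1 x2 y2 ...' line per
    vehicle-part polygon, with normalised vertex coordinates."""
    polygons = []
    for line in label_path.read_text().splitlines():
        if not line.strip():
            continue
        values = line.split()
        cls = int(values[0])
        coords = list(map(float, values[1:]))
        polygons.append((cls, list(zip(coords[0::2], coords[1::2]))))
    return polygons


if __name__ == "__main__":
    for image_path in sorted(IMAGES_DIR.glob("*.*")):
        damage_file = DAMAGE_LABELS_DIR / f"{image_path.stem}.txt"
        parts_file = PARTS_LABELS_DIR / f"{image_path.stem}.txt"
        boxes = read_damage_boxes(damage_file) if damage_file.exists() else []
        polygons = read_part_polygons(parts_file) if parts_file.exists() else []
        print(f"{image_path.name} (device {device_id(image_path)}): "
              f"{len(boxes)} damage boxes, {len(polygons)} part polygons")
```

Because the annotations follow standard YOLO conventions and class names are supplied in data_damage.yaml and data_parts.yaml, the dataset can in principle be trained with any framework that reads this configuration format. Below is a minimal training sketch assuming the Ultralytics YOLO package is installed (pip install ultralytics) and that the image/label paths referenced by the YAML files (or by copies prepared with prepare_dataset.py) point to train/validation splits on your machine; how the 245 images are split is left to the user.

```python
from ultralytics import YOLO

# Damage detection (bounding boxes), using the provided detection config.
detector = YOLO("yolov8n.pt")
detector.train(data="data_damage.yaml", epochs=100, imgsz=640)

# Vehicle-part segmentation (polygons), using the provided segmentation config.
segmenter = YOLO("yolov8n-seg.pt")
segmenter.train(data="data_parts.yaml", epochs=100, imgsz=640)
```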

Institutions

Universidade do Minho; Centro ALGORITMI, Universidade do Minho

Categories

Computer Vision, Object Detection, Urban Mobility, Deep Learning, Instance Segmentation

Funding

Funded by the European Union under NextGenerationEU, through a grant from the Portuguese Republic's Recovery and Resilience Plan (PRR) Partnership Agreement, within the scope of the project BE.NEUTRAL – Agenda da Mobilidade para a neutralidade carbónica das cidades (Mobility Agenda for the carbon neutrality of cities).

Project ref. nr. 35 - C644874240-00000016

Licence