Walking-Pose with Exoskeleton

Published: 28 July 2023 | Version 1 | DOI: 10.17632/637s7mvsg7.1
Chen Wang


We provide an RGB-D dataset with fine-grained annotations for detecting human joint points during walking, comprising 3,588 pairs of RGB images and corresponding depth images. The RGB-D images were captured from people walking freely while wearing an exoskeleton in different scenes. The training set contains 2,870 pairs of RGB-D images, and the validation set contains 718 pairs. We used a RealSense D435i depth camera to collect the images in real scenes at Beihang University; the participants were six males and four females. The images are padded and resized to 512 x 512 to simplify data loading. Each annotation contains eight human joint points. Each line of a label file is organized as follows:

x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 x6 y6 x7 y7 x8 y8

where:
x1, y1 — left shoulder joint
x2, y2 — right shoulder joint
x3, y3 — left hip joint
x4, y4 — right hip joint
x5, y5 — left knee joint
x6, y6 — right knee joint
x7, y7 — left ankle joint
x8, y8 — right ankle joint

The label of an image is stored in a text file and associated with the image by file name.
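The annotation format above can be read with a small parser. The sketch below is an assumption about how one might load a label line, not code shipped with the dataset; the joint names and the example coordinate values are illustrative only.

```python
# Minimal sketch of parsing one label line of the dataset:
# each line holds 16 numbers, i.e. (x, y) pairs for 8 joints
# in the order given in the description above.
JOINT_NAMES = [
    "left_shoulder", "right_shoulder",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]

def parse_label_line(line):
    """Map one annotation line to a {joint_name: (x, y)} dict."""
    vals = [float(v) for v in line.split()]
    if len(vals) != 2 * len(JOINT_NAMES):
        raise ValueError("expected 8 (x, y) pairs per line")
    return {name: (vals[2 * i], vals[2 * i + 1])
            for i, name in enumerate(JOINT_NAMES)}

# Example with made-up coordinates (images are 512 x 512):
example = "100 120 140 121 105 200 138 202 103 260 140 262 101 320 142 322"
joints = parse_label_line(example)
```

A full loader would iterate over the lines of each text file and pair it with the RGB and depth images sharing the same file name.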



Beihang University


Computer Vision, Gait, Convolutional Neural Network, Deep Learning, Exoskeleton