A dataset of egocentric- and exocentric-view hands in interactive scenes
Description
The dataset provides data from an egocentric (first-person) view and an exocentric (third-person) view. It contains 47,166 frame images extracted from original videos captured with an iPhone, with the egocentric and exocentric data recorded synchronously. The data were acquired in the real world under natural light, white light, yellow light, and dim light. The dataset covers interactive activities such as poker, checkers, and dice games involving two, three, or four participants. It contains a wide variety of hand gestures, including challenging cases such as motion blur, severe deformation, sharp shadows, and extremely dim lighting. The dataset principally provides original data; researchers can process it for the training, validation, and testing of supervised, semi-supervised, unsupervised, and self-supervised deep learning models in static or real-time interaction scenarios, according to their research requirements.
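Since the release provides raw frames rather than predefined splits, a minimal sketch of how one might partition the images into training, validation, and test subsets is shown below. The directory names ("egocentric", "exocentric"), the ".jpg" extension, and the 80/10/10 split ratios are assumptions for illustration only and are not specified by the dataset.

```python
# Minimal sketch (not part of the dataset release): split frame images
# into train/val/test subsets for supervised training.
import random
from pathlib import Path

def split_frames(root: str, seed: int = 0,
                 ratios: tuple[float, float] = (0.8, 0.1)) -> dict[str, list[Path]]:
    """Collect frame images under `root` and split them into train/val/test."""
    frames = sorted(Path(root).rglob("*.jpg"))  # assumed image format
    random.Random(seed).shuffle(frames)         # reproducible shuffle
    n_train = int(len(frames) * ratios[0])
    n_val = int(len(frames) * ratios[1])
    return {
        "train": frames[:n_train],
        "val": frames[n_train:n_train + n_val],
        "test": frames[n_train + n_val:],
    }

if __name__ == "__main__":
    # Hypothetical local folders holding the two synchronized views.
    for view in ("egocentric", "exocentric"):
        splits = split_frames(view)
        print(view, {name: len(paths) for name, paths in splits.items()})
```

Because the two views were recorded synchronously, users who need paired egocentric/exocentric frames should split by recording session or timestamp rather than by individual frame, so that corresponding frames from both views fall into the same subset.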