This dataset contains the visual features of over 1,500 days of lifelogging, recorded by more than 50 independent subjects*. Due to privacy constraints, the original pictures are not provided.

We release the dataset as two independent subsets so that it can be used in conjunction with our Visual Context Predictor (VCP) model: the model is trained on data from R3training.h5 and has not seen any data from R3testing.h5.

Each data file contains four hdf5 datasets: 'user_id', 'day' (consecutive recording days), 'frame_id' (HHMMSS), and 'descriptor' (the visual feature for that frame). The features are unnormalized, as extracted with the Keras model InceptionV3 using the parameters include_top=False, pooling='max'. Note that the VCP model was trained on data rescaled to the range [0, 1] with min-max normalization.

If you use this data or the model, please cite the following publication:

Ana García del Molino, Joo-Hwee Lim, and Ah-Hwee Tan. 2018. Predicting Visual Context for Unsupervised Event Segmentation in Continuous Photo-streams. In Proceedings of the ACM Multimedia Conference (ACMMM'18). ACM, New York, NY, USA. https://doi.org/10.1145/3240508.3240624

(*) Due to storage space limitations, the dataset had to be slightly reduced compared to the numbers reported in our paper "Predicting Visual Context for Unsupervised Event Segmentation in Continuous Photo-streams". If you require other visual features for your research, please contact us.
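As a minimal sketch of working with the file layout described above, the snippet below builds a tiny synthetic hdf5 file with the same four datasets (the field names come from this description; the shapes and values are made up for illustration, and the real files are R3training.h5 and R3testing.h5), then reads the descriptors back with h5py and applies a min-max rescaling to [0, 1]. Whether the normalization used for the VCP model was computed globally or per dimension is not specified here; this sketch assumes a global min and max.

```python
import h5py
import numpy as np

# Build a tiny synthetic file mimicking the described layout.
# The real files (R3training.h5, R3testing.h5) follow the same scheme.
with h5py.File("toy_lifelog.h5", "w") as f:
    f.create_dataset("user_id", data=np.array([1, 1, 2]))
    f.create_dataset("day", data=np.array([1, 1, 1]))          # consecutive recording days
    f.create_dataset("frame_id", data=np.array([90000, 90030, 93000]))  # HHMMSS
    # InceptionV3 with pooling='max' yields 2048-dim features per frame.
    f.create_dataset("descriptor",
                     data=np.random.rand(3, 2048).astype("float32") * 10)

# Read the unnormalized descriptors back and rescale to [0, 1]
# (min-max normalization, as the VCP model expects; assumed global here).
with h5py.File("toy_lifelog.h5", "r") as f:
    desc = f["descriptor"][:]

normalized = (desc - desc.min()) / (desc.max() - desc.min())
```

After this, `normalized` spans exactly [0, 1] and keeps the original (frames x 2048) shape.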
Steps to reproduce
To use or fine-tune the model, please follow the instructions on our GitHub page.
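To run the model on your own photo-streams, you first need descriptors in the same format. The following is a sketch of the extraction setup stated above (Keras InceptionV3 with include_top=False, pooling='max'); it feeds a random image purely for illustration, and it instantiates the network with weights=None so the sketch runs without downloading anything. To reproduce the dataset's features you would pass weights='imagenet' and your actual photos instead.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# Extraction setup from the dataset description: no classification head,
# global max pooling over the last convolutional feature map.
# NOTE: weights=None keeps this sketch offline; use weights='imagenet'
# to match the features released in this dataset.
model = InceptionV3(include_top=False, pooling="max", weights=None)

# InceptionV3 expects 299x299 RGB input; preprocess_input maps pixel
# values to the [-1, 1] range the network was trained on.
# A random image stands in for a real lifelog frame here.
image = np.random.rand(1, 299, 299, 3).astype("float32") * 255.0
features = model.predict(preprocess_input(image), verbose=0)
```

With pooling='max', each frame yields a 2048-dimensional descriptor, matching the 'descriptor' datasets in the released files. Remember that these raw features are unnormalized.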