GF-FRCNN MSCOCO

Published: 18 November 2024 | Version 3 | DOI: 10.17632/sf238jg557.3

Description

Version 1: Each file contains 36 × 17-dimensional geometric features extracted from a fixed set of 36 bounding boxes (Anderson et al., 2018) produced by Faster R-CNN (Ren et al., 2015) for each image in the MSCOCO Image Captioning Dataset (Lin et al., 2014).

Version 2: incorrect upload (superseded).

Version 3 (this version): Each file contains 17-dimensional geometric features extracted from an adaptive set of 10 to 100 bounding boxes (Anderson et al., 2018) produced by Faster R-CNN (Ren et al., 2015) for each image in the MSCOCO Image Captioning Dataset (Lin et al., 2014). The dataset covers all 123,287 images in MSCOCO and provides essential spatial information about each detected object.

For every bounding box, the feature sequence is: relative_x1, relative_y1, relative_x2, relative_y2, relative_widthb, relative_heightb, relative_area, aspect_ratio, relative_center_x, relative_center_y, relative_perimeter, relative_diagonal, relative_left_margin, relative_top_margin, relative_right_margin, relative_bottom_margin, relative_distance_to_center.

References

Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems 28.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L., 2014. Microsoft COCO: Common objects in context, in: Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, Springer, pp. 740–755.

Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L., 2018. Bottom-up and top-down attention for image captioning and visual question answering, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6077–6086.
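As a rough illustration of how such geometric features can be derived from a detected box, the sketch below (a minimal Python/NumPy example, not the code used to build this dataset) computes 17 values in the listed order from box corner coordinates (x1, y1, x2, y2) and the image size (W, H). The exact normalizers, e.g. for relative_perimeter, relative_diagonal, and relative_distance_to_center, are assumptions here.

```python
import numpy as np

def geometric_features(x1, y1, x2, y2, W, H):
    """Sketch of the 17 geometric features in the listed order.
    Normalization choices (image width/height/perimeter/diagonal) are
    assumptions, not the verified definitions used in this dataset."""
    wb, hb = x2 - x1, y2 - y1                       # box width / height in pixels
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0       # box center in pixels
    img_diag = np.hypot(W, H)                       # image diagonal (assumed normalizer)
    return np.array([
        x1 / W,                                     # relative_x1
        y1 / H,                                     # relative_y1
        x2 / W,                                     # relative_x2
        y2 / H,                                     # relative_y2
        wb / W,                                     # relative_widthb
        hb / H,                                     # relative_heightb
        (wb * hb) / (W * H),                        # relative_area
        wb / hb,                                    # aspect_ratio
        cx / W,                                     # relative_center_x
        cy / H,                                     # relative_center_y
        2 * (wb + hb) / (2 * (W + H)),              # relative_perimeter (image perimeter assumed)
        np.hypot(wb, hb) / img_diag,                # relative_diagonal
        x1 / W,                                     # relative_left_margin
        y1 / H,                                     # relative_top_margin
        (W - x2) / W,                               # relative_right_margin
        (H - y2) / H,                               # relative_bottom_margin
        np.hypot(cx - W / 2, cy - H / 2) / (img_diag / 2),  # relative_distance_to_center
    ], dtype=np.float32)

# Example: one box in a 640 x 480 image -> a length-17 feature vector
feats = geometric_features(50, 40, 200, 160, W=640, H=480)
print(feats.shape)  # (17,)
```

Stacking such vectors for the 10 to 100 boxes detected in an image would give one (num_boxes, 17) array per image, matching the per-box feature order described above.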

Institutions

University of Science and Technology of China

Categories

Object Detection, Feature Detection, Image Analysis

Licence